Recovering dense 3D point clouds from single endoscopic image
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: Computer Methods and Programs in Biomedicine
Volume: 205
eISSN: 1872-7565
ISSN: 0169-2607
DOI: 10.1016/j.cmpb.2021.106077
Abstract: Background and objective: Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images. Methods: An unsupervised mono-depth learning network is used to generate depth information from monocular images. Given a single monocular endoscopic image, the network can produce a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms. Results: Our proposed methods outperform both the state-of-the-art learning-based and non-learning-based methods for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes. Our framework is able to recover complete 3D point clouds even when up to 60% of the information is missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created. We have made these datasets publicly available to researchers free of charge. Conclusions: The proposed computational framework can produce high-quality, dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
https://eprints.bournemouth.ac.uk/35445/
Source: Scopus
Recovering dense 3D point clouds from single endoscopic image.
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: Comput Methods Programs Biomed
Volume: 205
Pages: 106077
eISSN: 1872-7565
DOI: 10.1016/j.cmpb.2021.106077
Abstract: BACKGROUND AND OBJECTIVE: Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images. METHODS: An unsupervised mono-depth learning network is used to generate depth information from monocular images. Given a single monocular endoscopic image, the network can produce a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms. RESULTS: Our proposed methods outperform both the state-of-the-art learning-based and non-learning-based methods for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes. Our framework is able to recover complete 3D point clouds even when up to 60% of the information is missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created. We have made these datasets publicly available to researchers free of charge. CONCLUSIONS: The proposed computational framework can produce high-quality, dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
https://eprints.bournemouth.ac.uk/35445/
Source: PubMed
Recovering dense 3D point clouds from single endoscopic image
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
Volume: 205
eISSN: 1872-7565
ISSN: 0169-2607
DOI: 10.1016/j.cmpb.2021.106077
https://eprints.bournemouth.ac.uk/35445/
Source: Web of Science (Lite)
Recovering dense 3D point clouds from single endoscopic image
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: Computer Methods and Programs in Biomedicine
Volume: 205
Issue: June 2021
Pages: 106077
Publisher: Elsevier
ISSN: 0169-2607
DOI: 10.1016/j.cmpb.2021.106077
Abstract: Background and objective: Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images.
Methods: An unsupervised mono-depth learning network is used to generate depth information from monocular images. Given a single monocular endoscopic image, the network can produce a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms.
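As an illustration of the depth-to-point-cloud step described above, the sketch below back-projects a depth map through a standard pinhole camera model (X = (u - cx)·Z/fx, Y = (v - cy)·Z/fy). It is a minimal sketch under assumed camera intrinsics, not the authors' implementation; the function name and intrinsic values are hypothetical placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metric depth) into an N x 3 point cloud
    using a pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel image coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid (positive) depth

# Hypothetical example: a 480 x 640 depth map and placeholder intrinsics.
depth = np.random.uniform(0.01, 0.1, size=(480, 640))  # metres
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```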
Results: Our proposed methods outperform both the state-of-the-art learning-based and non-learning-based methods for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes. Our framework is able to recover complete 3D point clouds even when up to 60% of the information is missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created. We have made these datasets publicly available to researchers free of charge.
Conclusions: The proposed computational framework can produce high-quality, dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
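The Methods above also describe a generative auto-encoder (Endo-AE) that repairs holes in the dense point cloud. The sketch below is a minimal, hypothetical point-cloud auto-encoder trained with a Chamfer-distance loss; it illustrates the completion idea only and is not the published Endo-AE architecture.

```python
import torch
import torch.nn as nn

class PointCloudAE(nn.Module):
    """Toy point-cloud auto-encoder: a PointNet-style encoder (shared MLP plus
    max-pooling) and an MLP decoder that emits a fixed-size completed cloud.
    Purely illustrative; the published Endo-AE network may differ."""
    def __init__(self, latent_dim=256, out_points=2048):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, out_points * 3),
        )
        self.out_points = out_points

    def forward(self, partial):                       # partial: (B, N, 3)
        feat = self.encoder(partial.transpose(1, 2))  # (B, latent_dim, N)
        latent = feat.max(dim=2).values               # global shape code (B, latent_dim)
        out = self.decoder(latent)                    # (B, out_points * 3)
        return out.view(-1, self.out_points, 3)

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between batched point sets (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(a, b)                             # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# Hypothetical training step: 'partial' is a cloud with holes, 'complete' the target.
model = PointCloudAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
partial, complete = torch.rand(2, 1024, 3), torch.rand(2, 2048, 3)
loss = chamfer_distance(model(partial), complete)
loss.backward()
optimizer.step()
```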
https://eprints.bournemouth.ac.uk/35445/
Source: Manual
Recovering dense 3D point clouds from single endoscopic image.
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: Computer methods and programs in biomedicine
Volume: 205
Pages: 106077
eISSN: 1872-7565
ISSN: 0169-2607
DOI: 10.1016/j.cmpb.2021.106077
Abstract: Background and objective
Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images.
Methods
An unsupervised mono-depth learning network is used to generate depth information from monocular images. Given a single monocular endoscopic image, the network can produce a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms.
Results
Our proposed methods outperform both the state-of-the-art learning-based and non-learning-based methods for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes. Our framework is able to recover complete 3D point clouds even when up to 60% of the information is missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created. We have made these datasets publicly available to researchers free of charge.
Conclusions
The proposed computational framework can produce high-quality, dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
https://eprints.bournemouth.ac.uk/35445/
Source: Europe PubMed Central
Recovering dense 3D point clouds from single endoscopic image.
Authors: Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, T.R. and Xue, T.
Journal: Computer Methods and Programs in Biomedicine
Volume: 205
ISSN: 0169-2607
Abstract: Background and objective: Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images. Methods: An unsupervised mono-depth learning network is used to generate depth information from monocular images. Given a single monocular endoscopic image, the network can produce a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms. Results: Our proposed methods outperform both the state-of-the-art learning-based and non-learning-based methods for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes. Our framework is able to recover complete 3D point clouds even when up to 60% of the information is missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created. We have made these datasets publicly available to researchers free of charge. Conclusions: The proposed computational framework can produce high-quality, dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
https://eprints.bournemouth.ac.uk/35445/
Source: BURO EPrints