Solving the challenging problem of 3D object reconstruction from a single image enables existing systems to operate with a single monocular camera rather than requiring depth sensors. In recent years, thanks to advances in deep learning, single-image 3D reconstruction has made impressive progress. Existing methods use the Chamfer distance as a loss function to guide the training of the neural network. However, the Chamfer loss assigns equal weight to all points in the 3D point cloud, so it tends to sacrifice fine-grained and thin structures to avoid incurring a high loss, which leads to visually unsatisfactory results. This paper proposes a framework that recovers a detailed three-dimensional point cloud from a single image by focusing more on boundaries (edge and corner points). Experimental results demonstrate that the proposed method significantly outperforms existing techniques, both qualitatively and quantitatively, while using fewer training parameters.
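As a minimal sketch of the equal-weighting issue the abstract describes, the symmetric Chamfer distance can be written as follows in NumPy (the function name and brute-force pairwise computation are our own illustration, not the paper's implementation; practical pipelines use nearest-neighbor structures or GPU kernels):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    Every point contributes with equal weight, which is why thin or
    fine-grained structures (represented by only a few points) barely
    affect the total loss value.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Because each point's nearest-neighbor term is averaged uniformly, a network can drop a thin structure made of a handful of points at almost no cost, which motivates the boundary-focused weighting proposed here.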