With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
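To make the core idea concrete, the following is a minimal sketch of what such an implicitly defined occupancy function might look like in code: a small MLP that maps a 3D query point together with a latent code (produced by some encoder of the input image, point cloud, or voxel grid) to an occupancy probability, with the surface given by the 0.5 decision boundary. All names (OccupancyNetwork, latent_dim, hidden_dim) and the plain concatenation-based conditioning are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class OccupancyNetwork(nn.Module):
    """Illustrative occupancy network: maps a 3D point and a latent code
    describing the observed input to an occupancy probability in [0, 1].
    (Hypothetical architecture; the paper's model may differ.)"""

    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single occupancy logit per point
        )

    def forward(self, points, latent):
        # points: (B, N, 3) continuous query locations
        # latent: (B, latent_dim) code from an encoder of the input observation
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, latent], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)  # occupancy probability per query point


if __name__ == "__main__":
    net = OccupancyNetwork()
    pts = torch.rand(4, 1024, 3)   # 1024 query points for each of 4 examples
    z = torch.randn(4, 128)        # latent codes from a (hypothetical) encoder
    occ = net(pts, z)              # (4, 1024) occupancy probabilities
    # The reconstructed surface is the decision boundary {p : f(p, z) = 0.5},
    # which can be queried at arbitrary resolution without storing a voxel grid.
    print(occ.shape)
```

Because the network is queried point by point, the memory cost of the representation is independent of the output resolution, which is the property the abstract emphasizes.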