In recent years, neural distance functions trained via volumetric ray marching have been widely adopted for multi-view 3D reconstruction. These methods, however, apply the ray marching procedure to the entire scene volume, leading to reduced sampling efficiency and, as a result, lower reconstruction quality in areas of high-frequency detail. In this work, we address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction. We use the coarse representation to efficiently exclude the empty volume of the scene from the volumetric ray marching procedure without additional forward passes of the neural surface network, which increases the fidelity of the reconstructions compared to the base systems. We evaluate our approach by incorporating it into the training procedures of several implicit surface modeling methods and observe consistent improvements across both synthetic and real-world datasets. Our codebase can be accessed via the project page: https://andreeadogaru.github.io/SphereGuided
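To illustrate the core idea of restricting ray-marching samples to a sphere-covered region, the sketch below intersects a ray with a set of spheres and keeps only sample distances that fall inside some sphere. This is a minimal illustrative sketch, not the paper's implementation: the function names, the uniform sampling strategy, and the single-ray interface are assumptions made for clarity.

```python
import numpy as np

def ray_sphere_intervals(origin, direction, centers, radii):
    """Intersect one ray (unit-norm direction) with S spheres.

    Returns per-sphere (t_near, t_far) distance intervals along the ray;
    spheres the ray misses are marked with an empty interval (t_near > t_far).
    """
    oc = origin[None, :] - centers                     # (S, 3)
    b = np.sum(oc * direction[None, :], axis=1)        # quadratic half-coefficient
    c = np.sum(oc * oc, axis=1) - radii ** 2
    disc = b * b - c                                   # discriminant (unit direction)
    sqrt_disc = np.sqrt(np.maximum(disc, 0.0))
    t_near = -b - sqrt_disc
    t_far = -b + sqrt_disc
    miss = disc < 0.0
    t_near[miss], t_far[miss] = 1.0, 0.0               # empty interval for misses
    return t_near, t_far

def guided_samples(origin, direction, centers, radii, n_samples):
    """Draw ray-marching sample distances only inside sphere-covered segments."""
    t_near, t_far = ray_sphere_intervals(origin, direction, centers, radii)
    hit = t_far > t_near
    if not np.any(hit):
        return np.empty(0)                             # ray sees only empty space
    # Sample uniformly over the bounding range of the hit intervals,
    # then keep the samples covered by at least one sphere.
    t0, t1 = t_near[hit].min(), t_far[hit].max()
    t = np.linspace(max(t0, 0.0), t1, n_samples)
    covered = np.any(
        (t[:, None] >= t_near[None, :]) & (t[:, None] <= t_far[None, :]), axis=1
    )
    return t[covered]
```

For a ray from the origin along +z and a single sphere of radius 1 centered at (0, 0, 5), all returned samples lie in the covered segment [4, 6]; a ray that misses every sphere yields no samples at all, so no network evaluations would be spent on empty space.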