Recently, neural implicit surface learning by volume rendering has become popular for multi-view reconstruction. However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints and hence usually fail to produce geometry-consistent surface reconstructions. To address this challenge, we propose geometry-consistent neural implicit surface learning for multi-view reconstruction. We theoretically show that a gap exists between the volume rendering integral and point-based signed distance function (SDF) modeling. To bridge this gap, we directly locate the zero-level set of SDF networks and explicitly perform multi-view geometry optimization by leveraging the sparse geometry from structure from motion (SfM) and photometric consistency in multi-view stereo. This makes our SDF optimization unbiased and allows the multi-view geometry constraints to focus on optimizing the true surface. Extensive experiments show that our proposed method achieves high-quality surface reconstruction on both complex thin structures and large smooth regions, outperforming the state of the art by a large margin.
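The core operation mentioned above, locating the zero-level set of an SDF along viewing rays, can be illustrated with a minimal sketch. This is not the paper's actual implementation: `sphere_sdf` is a toy analytic SDF standing in for a trained SDF network, and the uniform sampling plus sign-change detection with linear interpolation is one common way to find the surface crossing.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Toy SDF of a sphere at the origin (stand-in for an SDF network)."""
    return np.linalg.norm(points, axis=-1) - radius

def locate_zero_level_set(sdf, origin, direction, t_near=0.0, t_far=4.0, n_samples=128):
    """Find the first zero crossing of the SDF along a ray.

    Samples the SDF uniformly, detects the first positive-to-negative
    sign change (outside -> inside), and linearly interpolates the
    crossing to approximate the surface point.
    """
    t = np.linspace(t_near, t_far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    d = sdf(pts)
    # Indices where consecutive samples straddle the surface.
    sign_change = (d[:-1] > 0) & (d[1:] <= 0)
    idx = np.where(sign_change)[0]
    if idx.size == 0:
        return None  # ray misses the surface
    i = idx[0]
    # Linear interpolation: solve d(t) = 0 between samples i and i+1.
    alpha = d[i] / (d[i] - d[i + 1])
    t_surf = t[i] + alpha * (t[i + 1] - t[i])
    return origin + t_surf * direction

# Example: a ray from z = -3 toward the origin hits the unit sphere at z = -1.
origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])
surface_point = locate_zero_level_set(sphere_sdf, origin, direction)
```

Once surface points are located this way, the multi-view constraints (SfM sparse points, photometric consistency) can be applied directly to them rather than to volume-rendered quantities.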