Recent works on implicit neural representations have made significant strides. Learning implicit neural surfaces using volume rendering has gained popularity in multi-view reconstruction without 3D supervision. However, accurately recovering fine details remains challenging, due to the underlying ambiguity between geometry and appearance representation. In this paper, we present D-NeuS, a volume rendering-based neural implicit surface reconstruction method capable of recovering fine geometric details, which extends NeuS with two additional loss functions targeting enhanced reconstruction quality. First, we encourage the surface points rendered by alpha compositing to have zero signed distance values, alleviating the geometry bias arising from transforming SDF to density for volume rendering. Second, we impose multi-view feature consistency on the surface points, which are derived by interpolating the SDF zero-crossings of the sampled points along each ray. Extensive quantitative and qualitative results demonstrate that our method reconstructs high-accuracy surfaces with fine details and outperforms the state of the art.
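The two losses hinge on locating a surface point per ray. A minimal sketch of the underlying mechanics, assuming linear interpolation between adjacent SDF samples along a ray (the helper names are hypothetical, not the authors' implementation):

```python
import numpy as np

def interpolate_zero_crossing(t_vals, sdf_vals):
    """Find the first sign change of the SDF along a ray and linearly
    interpolate the depth t* where the SDF crosses zero.
    Returns None when the ray does not intersect the surface."""
    # transitions from positive (outside) to non-positive (inside)
    sign_change = (sdf_vals[:-1] > 0) & (sdf_vals[1:] <= 0)
    if not sign_change.any():
        return None
    idx = int(np.argmax(sign_change))  # first crossing
    s0, s1 = sdf_vals[idx], sdf_vals[idx + 1]
    t0, t1 = t_vals[idx], t_vals[idx + 1]
    # linear interpolation of the zero level set between the two samples
    return t0 + (t1 - t0) * s0 / (s0 - s1)

def sdf_surface_loss(sdf_at_rendered_point):
    """Penalize a non-zero SDF value at the alpha-composited surface
    point, i.e. the geometry-bias loss described in the abstract."""
    return np.abs(sdf_at_rendered_point)
```

In a full pipeline, the interpolated point would be projected into the source views to compare image features for the multi-view consistency loss, while `sdf_surface_loss` is applied to the SDF evaluated at the point obtained by alpha compositing the samples along the ray.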