We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision, easily get trapped in local minima, and therefore struggle to reconstruct objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF and its variants, use volume rendering to produce a neural scene representation whose optimization is robust, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because the representation does not impose sufficient constraints on the surface. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) in surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
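To make the bias-free formulation concrete, the NeuS paper derives a discrete opacity from SDF values sampled along a camera ray: alpha_i = max((Phi_s(f(p_i)) - Phi_s(f(p_{i+1}))) / Phi_s(f(p_i)), 0), where Phi_s is a sigmoid with a learnable sharpness s, and rendering weights are w_i = T_i * alpha_i with accumulated transmittance T_i. The NumPy sketch below is a minimal illustration of that discrete formula (the function name `neus_weights` and the toy SDF samples are ours for illustration):

```python
import numpy as np

def sigmoid(x, s):
    # Phi_s: sigmoid with inverse standard deviation s (learnable in NeuS)
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_weights(sdf_vals, s=64.0):
    # sdf_vals: SDF sampled at consecutive points along one camera ray
    # Discrete opacity: alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0)
    phi = sigmoid(np.asarray(sdf_vals, dtype=float), s)
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-10), 0.0, 1.0)
    # Accumulated transmittance: T_i = prod_{j < i} (1 - alpha_j)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    return trans * alpha  # rendering weights w_i = T_i * alpha_i

# A ray crossing the surface (SDF sign change) concentrates weight there,
# which is what makes the formulation suitable for surface extraction.
w = neus_weights([0.5, 0.3, 0.1, -0.1, -0.3], s=10.0)
```

With this toy ray, the largest weight falls on the interval where the SDF changes sign, i.e. where the ray crosses the zero-level set.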