We introduce a neural implicit framework that exploits the differentiability of neural networks and the discrete geometry of point-sampled surfaces to approximate such surfaces as the level sets of neural implicit functions. To train a neural implicit function, we propose a loss functional that approximates a signed distance function and admits terms with high-order derivatives, such as the alignment between the principal directions of curvature, to capture finer geometric detail. During training, we use a non-uniform sampling strategy based on the curvatures of the point-sampled surface, prioritizing points in regions with more geometric detail. Compared with previous approaches, this sampling yields faster training while preserving geometric accuracy. We also present analytical differential-geometry formulas for neural surfaces, such as normal vectors and curvatures.
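To make the loss functional concrete, below is a minimal sketch (not the authors' code) of an SDF-style training objective for a neural implicit function, assuming a PyTorch network `model`, on-surface samples `p` with normals `n`, and off-surface samples `q`. The weights `w` and the specific terms (data fidelity, normal alignment, and an Eikonal regularizer) are illustrative stand-ins for the functional described in the abstract; higher-order terms such as curvature alignment would be added analogously via further differentiation.

```python
import torch

def sdf_loss(model, p, n, q, w=(1.0, 1.0, 0.1)):
    """p: on-surface points, n: their unit normals, q: off-surface samples.

    Hypothetical loss terms; weights w are illustrative, not from the paper.
    """
    p = p.requires_grad_(True)
    q = q.requires_grad_(True)

    fp = model(p)  # should vanish on the surface
    grad_p = torch.autograd.grad(fp.sum(), p, create_graph=True)[0]

    fq = model(q)
    grad_q = torch.autograd.grad(fq.sum(), q, create_graph=True)[0]

    data = fp.abs().mean()                                   # f = 0 on samples
    normal = (1 - torch.cosine_similarity(grad_p, n, dim=-1)).mean()  # grad f aligns with normals
    eikonal = ((grad_q.norm(dim=-1) - 1) ** 2).mean()        # |grad f| = 1, SDF property
    return w[0] * data + w[1] * normal + w[2] * eikonal
```

Because the terms are built from `autograd` with `create_graph=True`, the loss remains differentiable, which is what allows high-order derivative terms to participate in training.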
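The analytical formulas mentioned at the end follow from viewing the surface as a level set of f: the unit normal is grad f / |grad f|, and the mean curvature is, up to sign convention, half the divergence of that field. A hedged sketch of these quantities via automatic differentiation, again assuming a PyTorch `model` and purely for illustration:

```python
import torch

def normal_and_mean_curvature(model, x):
    """Unit normal and mean curvature of the level set of `model` at points x."""
    x = x.requires_grad_(True)
    f = model(x)
    g = torch.autograd.grad(f.sum(), x, create_graph=True)[0]  # grad f
    n = g / g.norm(dim=-1, keepdim=True)                       # unit normal

    # Mean curvature H = (1/2) div(grad f / |grad f|), computed component-wise;
    # the overall sign depends on the chosen orientation convention.
    div = 0.0
    for i in range(x.shape[-1]):
        div = div + torch.autograd.grad(n[..., i].sum(), x,
                                        create_graph=True)[0][..., i]
    return n, 0.5 * div
```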