We introduce a neural implicit framework that exploits the differentiable properties of neural networks and the discrete geometry of point-sampled surfaces to approximate such surfaces as level sets of neural implicit functions. To train a neural implicit function, we propose a loss functional that approximates a signed distance function and admits terms with high-order derivatives, such as the alignment between principal directions of curvature, allowing the network to learn finer geometric detail. During training, we use a non-uniform sampling strategy based on the curvatures of the point-sampled surface to prioritize points carrying more geometric detail. This sampling yields faster training while preserving geometric accuracy compared with previous approaches. We also use the analytical derivatives of the neural implicit function to estimate differential measures of the underlying point-sampled surface.
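The core idea — a scalar network whose zero level set approximates the surface, trained with a signed-distance-style loss using the network's analytical derivatives — can be illustrated with a minimal sketch. The toy one-hidden-layer network, its parameter names (`W1`, `b1`, `w2`, `b2`), and the two-term loss below are illustrative assumptions, not the paper's actual architecture or full loss functional (which also includes higher-order terms such as normal and principal-curvature-direction alignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer implicit function f(x) = w2 . tanh(W1 x + b1) + b2.
# Parameter shapes are illustrative; the paper uses a deeper smooth MLP.
W1 = rng.normal(size=(16, 3))
b1 = rng.normal(size=16)
w2 = rng.normal(size=16)
b2 = 0.0

def f(x):
    """Scalar implicit function evaluated at a 3D point x."""
    return w2 @ np.tanh(W1 @ x + b1) + b2

def grad_f(x):
    """Analytical gradient of f via the chain rule through tanh.

    This is the kind of closed-form derivative used both in training
    and to estimate differential measures (e.g. normals) of the surface.
    """
    z = W1 @ x + b1
    return W1.T @ (w2 * (1.0 - np.tanh(z) ** 2))

def sdf_loss(points):
    """On-surface data term |f(p)| plus an Eikonal term (||grad f|| - 1)^2.

    A minimal stand-in for a signed-distance-approximating loss; the
    full functional in the paper carries additional derivative terms.
    """
    data = np.mean([abs(f(p)) for p in points])
    eikonal = np.mean([(np.linalg.norm(grad_f(p)) - 1.0) ** 2 for p in points])
    return data + eikonal

pts = rng.normal(size=(8, 3))
print(sdf_loss(pts))
```

Because the gradient is analytical rather than numerical, the same machinery extends to second-order quantities (curvatures) by differentiating `grad_f` again.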