We introduce a neural implicit framework that bridges the discrete differential geometry of triangle meshes and the continuous differential geometry of neural implicit surfaces. It exploits the differentiable properties of neural networks and the discrete geometry of triangle meshes to approximate the meshes as zero-level sets of neural implicit functions. To train a neural implicit function, we propose a loss function that admits terms with high-order derivatives, such as the alignment between principal directions, enabling the network to learn finer geometric detail. During training, we use a non-uniform sampling strategy based on the discrete curvatures of the triangle mesh, which prioritizes points in regions with more geometric detail. This sampling speeds up learning while preserving geometric accuracy. We derive analytical differential-geometry formulas for neural surfaces, such as normal vectors and curvatures, and use them to render the surfaces via sphere tracing. Additionally, we propose a network optimization based on singular value decomposition that reduces the number of parameters.
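As one illustration, the curvature-based sampling described above can be sketched as drawing training points from the mesh vertices with probability proportional to the magnitude of their discrete curvature. This is a minimal sketch under stated assumptions, not the paper's procedure; in particular, the choice of absolute curvature plus a small floor as the sampling weight is illustrative.

```python
import numpy as np

def curvature_weighted_sample(vertices, vertex_curvatures, n_points, rng=None):
    """Draw training points from mesh vertices, favoring curved regions.

    vertices: (V, 3) array; vertex_curvatures: (V,) discrete curvatures.
    Weights are |curvature| plus a small floor so flat regions are
    still occasionally covered.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.abs(vertex_curvatures) + 1e-3
    idx = rng.choice(len(vertices), size=n_points, p=w / w.sum())
    return vertices[idx]
```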
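The analytical quantities mentioned above follow from standard implicit-surface formulas. The sketch below (a minimal illustration assuming the neural implicit f is a twice-differentiable scalar PyTorch module, not the paper's implementation) computes the unit normal ∇f/|∇f| and the mean curvature of the level set via automatic differentiation; the sign of the curvature depends on the orientation convention.

```python
import torch

def gradient(f, x):
    """First derivatives of the scalar network f at points x (N, 3).

    x must already have requires_grad=True.
    """
    y = f(x)
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

def normal_and_mean_curvature(f, x):
    """Unit normal and mean curvature of the level set of f through x.

    Implicit-surface formula (up to orientation sign):
        2H = (g^T Hess g - |g|^2 tr Hess) / |g|^3,   g = grad f.
    """
    x = x.detach().requires_grad_(True)
    g = gradient(f, x)                                  # (N, 3)
    n = g / g.norm(dim=-1, keepdim=True)                # unit normals
    # Hessian: one extra autograd pass per coordinate, rows stacked.
    hess = torch.stack(
        [torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0]
         for i in range(x.shape[-1])], dim=1)           # (N, 3, 3)
    gnorm = g.norm(dim=-1)
    gHg = torch.einsum('ni,nij,nj->n', g, hess, g)
    trH = torch.einsum('nii->n', hess)
    mean_curv = (gHg - gnorm**2 * trH) / (2.0 * gnorm**3)
    return n, mean_curv
```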
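Rendering via sphere tracing then marches rays by the network value itself, which is valid only to the extent that f approximates a signed distance function. The sketch below makes that assumption explicit; the step count and tolerance are illustrative.

```python
import torch

def sphere_trace(f, origins, dirs, n_steps=64, eps=1e-4, far=10.0):
    """March rays p(t) = o + t*d, stepping by the value of f.

    Assumes f approximates a signed distance function, so f(p) is a
    safe step size toward the surface. Returns hit points and a mask.
    """
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros_like(t, dtype=torch.bool)
    for _ in range(n_steps):
        p = origins + t.unsqueeze(-1) * dirs
        with torch.no_grad():
            d = f(p).reshape(-1)
        hit |= d.abs() < eps                      # converged rays
        t = torch.where(hit, t, (t + d).clamp(max=far))
    return origins + t.unsqueeze(-1) * dirs, hit
```

At the hit points, the normals from the previous sketch can supply the shading.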
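Finally, the SVD-based parameter reduction can be sketched as a truncated factorization of a trained weight matrix, splitting one linear layer into two thinner ones. The target rank and per-layer application are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def compress_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) weight by two rank-r factors.

    W ~= (U_r sqrt(S_r)) @ (sqrt(S_r) V_r^T); parameters drop from
    out*in to rank*(out+in), worthwhile when rank << min(out, in).
    """
    U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
    root_s = S[:rank].sqrt()
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features,
                       bias=layer.bias is not None)
    first.weight.data = root_s.unsqueeze(-1) * Vh[:rank]   # (rank, in)
    second.weight.data = U[:, :rank] * root_s              # (out, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)
```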