Neural implicit fields have recently emerged as a useful representation for 3D shapes. These fields are commonly represented as neural networks which map latent descriptors and 3D coordinates to implicit function values. The latent descriptor of a neural field acts as a deformation handle for the 3D shape it represents. Thus, smoothness with respect to this descriptor is paramount for performing shape-editing operations. In this work, we introduce a novel regularization designed to encourage smooth latent spaces in neural fields by penalizing the upper bound on the field's Lipschitz constant. Compared with prior Lipschitz regularized networks, ours is computationally fast, can be implemented in four lines of code, and requires minimal hyperparameter tuning for geometric applications. We demonstrate the effectiveness of our approach on shape interpolation and extrapolation as well as partial shape reconstruction from 3D point clouds, showing both qualitative and quantitative improvements over existing state-of-the-art and non-regularized baselines.
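The core idea, penalizing an upper bound on the network's Lipschitz constant, can be sketched as follows. This is a minimal illustration, not the authors' exact four-line implementation: it assumes a bias-free ReLU MLP and uses the product of per-layer infinity-norm weight norms as the bound, which is then added to the task loss as a regularizer.

```python
import numpy as np

def relu(x):
    # ReLU is 1-Lipschitz, so it does not inflate the per-layer bound.
    return np.maximum(x, 0.0)

def mlp(weights, x):
    """Forward pass of a bias-free ReLU MLP (biases omitted for brevity;
    they do not affect the Lipschitz bound)."""
    h = x
    for W in weights[:-1]:
        h = relu(W @ h)
    return weights[-1] @ h

def lipschitz_bound(weights):
    """Upper bound on the network's Lipschitz constant (infinity norm):
    the product of the layers' induced matrix norms."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=np.inf)  # max absolute row sum
    return bound

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 3)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(1, 8))]

# During training the bound is simply added to the task objective,
# e.g.  loss = task_loss + alpha * lipschitz_bound(weights)
c = lipschitz_bound(weights)

# Sanity check: the bound dominates an empirical difference quotient
# measured in the infinity norm.
x, y = rng.normal(size=3), rng.normal(size=3)
ratio = np.abs(mlp(weights, x) - mlp(weights, y)).max() / np.abs(x - y).max()
assert ratio <= c
```

Because the bound is differentiable in the weights, gradient descent on the combined loss shrinks the layer norms and thereby smooths the field with respect to its inputs, including the latent descriptor.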