Neural radiance fields (NeRF) represent a breakthrough in novel view synthesis and 3D modeling of complex scenes from multi-view image collections. Numerous recent works have focused on making these models more robust, by means of regularization, so that they can be trained on possibly inconsistent and/or very sparse data. In this work, we scratch the surface of how differential geometry can provide regularization tools for robustly training NeRF-like models, which are modified so as to represent continuous and infinitely differentiable functions. In particular, we show how these tools yield a direct mathematical formalization of previously proposed NeRF variants aimed at improving performance in challenging conditions (i.e., RegNeRF). Building on this, we show how the same formalism can be used to natively encourage the regularity of surfaces (by means of Gaussian and mean curvatures), making it possible, for example, to learn surfaces from a very limited number of views.
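As a minimal illustration of the kind of differential-geometric quantity involved (not the paper's actual method), the mean curvature of a surface represented implicitly by a smooth scalar field f can be computed as H = ½ div(∇f/|∇f|). The sketch below evaluates this with finite differences in place of the automatic differentiation an infinitely differentiable NeRF-like model would permit; the sphere SDF and step size are illustrative choices.

```python
import numpy as np

def grad(f, p, h=1e-4):
    # Central-difference gradient of the scalar field f at point p.
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def mean_curvature(f, p, h=1e-4):
    # H = 0.5 * div(grad f / |grad f|), via nested central differences.
    def unit_normal(q):
        g = grad(f, q, h)
        return g / np.linalg.norm(g)
    div = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        div += (unit_normal(p + e)[i] - unit_normal(p - e)[i]) / (2 * h)
    return 0.5 * div

# Signed distance field of a unit sphere: mean curvature on the surface is 1/r = 1.
sdf_sphere = lambda x: np.linalg.norm(x) - 1.0
p = np.array([0.6, 0.0, 0.8])  # a point on the surface
print(mean_curvature(sdf_sphere, p))  # ≈ 1.0
```

Penalizing quantities like H (and the Gaussian curvature, which involves the Hessian of f) during training is one way such regularity can be encouraged on the learned surface.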