Neural fields have revolutionized the area of 3D reconstruction and novel view synthesis of rigid scenes. A key challenge in making such methods applicable to articulated objects, such as the human body, is to model the deformation of 3D locations between the rest pose (a canonical space) and the deformed space. We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space via iterative root finding. Fast-SNARF is a functional drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency. We contribute several algorithmic and implementation improvements over SNARF, yielding a speed-up of $150\times$. These improvements include voxel-based correspondence search, pre-computing the linear blend skinning function, and an efficient software implementation with CUDA kernels. Fast-SNARF enables efficient and simultaneous optimization of shape and skinning weights given deformed observations without correspondences (e.g., 3D meshes). Because learning deformation maps is a crucial component of many 3D human avatar methods, and since Fast-SNARF provides a computationally efficient solution, we believe this work represents a significant step towards the practical creation of 3D virtual humans.
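To make the core idea concrete, the sketch below illustrates the correspondence problem the abstract describes: linear blend skinning (LBS) maps a canonical point $\mathbf{x}_c$ to a deformed point $\mathbf{x}_d = \sum_i w_i(\mathbf{x}_c)\,(R_i \mathbf{x}_c + t_i)$, and recovering the canonical correspondence of an observed deformed point requires inverting this map numerically via iterative root finding. This is a minimal NumPy illustration, not the paper's implementation: the analytic skinning-weight function, the two-bone rig, and the use of plain Newton iteration with a finite-difference Jacobian are all simplifying assumptions (SNARF and Fast-SNARF use learned weights, many bones, and a more robust solver with multiple initializations).

```python
import numpy as np

def skinning_weights(x_c):
    """Hypothetical analytic skinning weights for a 2-bone toy rig:
    blend smoothly between the bones along the canonical x-axis."""
    w1 = 1.0 / (1.0 + np.exp(-4.0 * x_c[0]))
    return np.array([1.0 - w1, w1])

def lbs_forward(x_c, transforms):
    """Linear blend skinning: x_d = sum_i w_i(x_c) * (R_i x_c + t_i)."""
    w = skinning_weights(x_c)
    x_d = np.zeros(3)
    for w_i, (R, t) in zip(w, transforms):
        x_d += w_i * (R @ x_c + t)
    return x_d

def find_canonical(x_d, transforms, x_init, iters=50, tol=1e-10, eps=1e-6):
    """Recover the canonical point for an observed deformed point by
    root finding on f(x) = lbs_forward(x) - x_d, using Newton steps
    with a finite-difference Jacobian (a stand-in for Broyden's method)."""
    x = np.array(x_init, dtype=float)
    for _ in range(iters):
        f = lbs_forward(x, transforms) - x_d
        if np.linalg.norm(f) < tol:
            break
        J = np.zeros((3, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = eps
            J[:, j] = (lbs_forward(x + dx, transforms) - (f + x_d)) / eps
        x = x - np.linalg.solve(J, f)
    return x
```

A usage example: pose the rig with one identity bone and one rotated, translated bone, deform a known canonical point, then recover it by initializing the search at the deformed location. Fast-SNARF's speed-ups come from making exactly this inner loop cheap, by tabulating the skinning weights on a voxel grid and running the root finding in fused CUDA kernels instead of per-point autodiff.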