With the introduction of Neural Radiance Fields (NeRFs), novel view synthesis has recently made a significant leap forward. At its core, NeRF proposes that each 3D point can emit radiance, allowing view synthesis to be performed via differentiable volumetric rendering. While neural radiance fields can accurately represent 3D scenes for image rendering, 3D meshes remain the main scene representation supported by most computer graphics and simulation pipelines, enabling tasks such as real-time rendering and physics-based simulation. Extracting 3D meshes from neural radiance fields remains an open challenge, since NeRFs are optimized for view synthesis and do not enforce an accurate underlying geometry on the radiance field. We therefore propose a novel, compact, and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach. Once the radiance field has been trained, we distill the volumetric 3D representation into a Signed Surface Approximation Network, allowing easy extraction of both the 3D mesh and its appearance. Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
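The distillation step described above can be illustrated with a minimal sketch: query the trained radiance field's density on a grid, convert density to a signed inside/outside target, and fit a compact student model whose zero level set approximates the surface. Everything here is an assumption for illustration only, not the paper's method: `nerf_density` is a hypothetical stand-in for a trained NeRF (a soft sphere), the signed-occupancy conversion is an assumed convention, and the student is a simple random-Fourier-feature regressor rather than the actual Signed Surface Approximation Network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained NeRF's density output:
# high density inside a soft sphere of radius 0.5, near zero outside.
def nerf_density(pts, radius=0.5, sharpness=20.0):
    r = np.linalg.norm(pts, axis=-1)
    return 10.0 / (1.0 + np.exp(-sharpness * (radius - r)))

# Sample a regular grid of 3D query points in [-1, 1]^3.
n = 24
axis = np.linspace(-1.0, 1.0, n)
pts = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)

# Distillation target: map density to a signed occupancy in [-1, 1]
# (positive inside the surface, negative outside; an assumed convention).
occ = 1.0 - np.exp(-nerf_density(pts) * 0.1)  # alpha for a fixed step size
target = 2.0 * occ - 1.0

# Compact student: random Fourier features fitted by linear least squares
# (a toy substitute for training a signed-surface MLP on these targets).
B = rng.normal(scale=5.0, size=(3, 256))
def featurize(x):
    proj = x @ B
    return np.concatenate([np.sin(proj), np.cos(proj), x,
                           np.ones((len(x), 1))], axis=1)

w, *_ = np.linalg.lstsq(featurize(pts), target, rcond=None)
pred = featurize(pts) @ w

# The surface would be extracted as the zero level set of the distilled
# field (e.g. via marching cubes); here we only check that the student
# reproduces the teacher's inside/outside decision on the grid.
agree = np.mean(np.sign(pred) == np.sign(target))
print(f"inside/outside agreement: {agree:.3f}")
```

In practice the student would be a small MLP trained with gradient descent, and mesh extraction would run marching cubes on its zero level set; the closed-form least-squares fit above is used only to keep the sketch self-contained.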