Neural radiance fields (NeRFs) have recently emerged as a promising approach for 3D reconstruction and novel view synthesis. However, NeRF-based methods encode shape, reflectance, and illumination implicitly, which makes it challenging for users to manipulate these properties in the rendered images explicitly. Existing approaches only enable limited editing of the scene and deformation of the geometry. Furthermore, no existing work enables accurate scene illumination after object deformation. In this work, we introduce SPIDR, a new hybrid neural SDF representation. SPIDR combines point cloud and neural implicit representations to enable the reconstruction of higher quality object surfaces for geometry deformation and lighting estimation. To more accurately capture environment illumination for scene relighting, we propose a novel neural implicit model to learn environment light. To enable more accurate illumination updates after deformation, we use the shadow mapping technique to approximate the light visibility updates caused by geometry editing. We demonstrate the effectiveness of SPIDR in enabling high quality geometry editing with more accurate updates to the illumination of the scene.