Neural radiance fields (NeRFs) have recently emerged as a promising approach for 3D reconstruction and novel view synthesis. However, NeRF-based methods encode shape, reflectance, and illumination implicitly, which makes it difficult for users to explicitly manipulate these properties in the rendered images. Existing approaches support only limited scene editing and geometry deformation, and no prior work produces accurate scene illumination after object deformation. In this work, we introduce SPIDR, a new hybrid neural SDF representation. SPIDR combines point cloud and neural implicit representations to reconstruct higher-quality object surfaces for geometry deformation and lighting estimation. To capture environment illumination more accurately for scene relighting, we propose a novel neural implicit model of environment light. To update illumination more accurately after deformation, we use the shadow mapping technique to approximate the changes in light visibility caused by geometry edits. We demonstrate the effectiveness of SPIDR in enabling high-quality geometry editing with more accurate updates to scene illumination.
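The abstract mentions using shadow mapping to approximate light-visibility updates after geometry edits. The sketch below illustrates the standard shadow-map visibility test on which such an update could be based; the function name, matrix conventions, and depth ranges are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): classic shadow-map visibility test,
# assuming a directional light with a precomputed orthographic depth map.
import numpy as np

def light_visibility(points, light_view, light_proj, shadow_depth, bias=1e-3):
    """Approximate per-point light visibility with shadow mapping.

    points:       (N, 3) world-space surface points (e.g. from the deformed point cloud)
    light_view:   (4, 4) world-to-light-view matrix
    light_proj:   (4, 4) light projection matrix (orthographic for a directional light)
    shadow_depth: (H, W) depth map rendered from the light's viewpoint, depths in [0, 1]
    returns:      (N,) array in {0.0, 1.0}; 1.0 means the point sees the light
    """
    H, W = shadow_depth.shape
    # Transform points into the light's clip space.
    homog = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    clip = homog @ light_view.T @ light_proj.T
    ndc = clip[:, :3] / clip[:, 3:4]              # normalized device coords in [-1, 1]
    # Map NDC x/y to shadow-map pixel indices and depth to [0, 1].
    u = np.clip(((ndc[:, 0] * 0.5 + 0.5) * (W - 1)).astype(int), 0, W - 1)
    v = np.clip(((ndc[:, 1] * 0.5 + 0.5) * (H - 1)).astype(int), 0, H - 1)
    depth = ndc[:, 2] * 0.5 + 0.5
    # A point is lit if it is not farther from the light than the stored depth (plus a bias).
    return (depth <= shadow_depth[v, u] + bias).astype(np.float32)
```

After an edit deforms the geometry, the shadow map is re-rendered from the light and this test is re-evaluated, so the visibility term used for shading reflects the new shape.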