In this work, we present I$^2$-SDF, a new method for intrinsic indoor scene reconstruction and editing using differentiable Monte Carlo raytracing on neural signed distance fields (SDFs). Our holistic neural SDF-based framework jointly recovers the underlying shapes, incident radiance, and materials from multi-view images. We introduce a novel bubble loss for fine-grained small objects and an error-guided adaptive sampling scheme, which substantially improve reconstruction quality on large-scale indoor scenes. Further, we decompose the neural radiance field into the spatially-varying material of the scene, represented as a neural field, through surface-based, differentiable Monte Carlo raytracing and emitter semantic segmentation, which enables physically based, photorealistic scene relighting and editing applications. Through extensive qualitative and quantitative experiments, we demonstrate the superior quality of our method on indoor scene reconstruction, novel view synthesis, and scene editing compared to state-of-the-art baselines.
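To make the core primitive concrete, the sketch below illustrates differentiable sphere tracing against a neural SDF, the basic operation underlying surface-based Monte Carlo raytracing on signed distance fields. This is not the paper's implementation: the toy MLP architecture, step count, and convergence threshold are illustrative assumptions; the point is only that gradients flow through the traced hit distance back into the network weights.

```python
# Minimal sketch (not the paper's code): differentiable sphere tracing
# against a neural SDF. All hyperparameters below are illustrative.
import torch
import torch.nn as nn

class ToySDF(nn.Module):
    """Tiny MLP standing in for a trained neural SDF f(x) -> signed distance."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def sphere_trace(sdf: nn.Module, origins: torch.Tensor, dirs: torch.Tensor,
                 n_steps: int = 64, eps: float = 1e-4) -> torch.Tensor:
    """March each ray forward by the predicted distance until |f(x)| < eps.

    origins, dirs: (N, 3) ray origins and unit directions.
    Returns (N,) hit distances t such that x = o + t * d lies near the surface.
    """
    t = torch.zeros(origins.shape[0], device=origins.device)
    for _ in range(n_steps):
        x = origins + t.unsqueeze(-1) * dirs
        d = sdf(x)                            # signed distance at current point
        converged = d.abs() < eps
        t = torch.where(converged, t, t + d)  # step by the SDF value if not done
    return t

# Usage: trace a batch of rays; because the loop is unrolled under autograd,
# gradients flow through t into the SDF weights, which is what "differentiable
# raytracing" means here.
if __name__ == "__main__":
    sdf = ToySDF()
    o = torch.zeros(8, 3)
    d = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
    t = sphere_trace(sdf, o, d)
    t.sum().backward()  # backprop through the tracer into the network
```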