In this work, we present I$^2$-SDF, a new method for intrinsic indoor scene reconstruction and editing using differentiable Monte Carlo raytracing on neural signed distance fields (SDFs). Our holistic neural SDF-based framework jointly recovers the underlying shapes, incident radiance, and materials from multi-view images. We introduce a novel bubble loss for fine-grained small objects and an error-guided adaptive sampling scheme to substantially improve reconstruction quality on large-scale indoor scenes. Further, we propose to decompose the neural radiance field into the spatially-varying material of the scene, represented as a neural field, through surface-based, differentiable Monte Carlo raytracing and emitter semantic segmentation, which enables physically based and photorealistic scene relighting and editing applications. Through a number of qualitative and quantitative experiments, we demonstrate the superior quality of our method on indoor scene reconstruction, novel view synthesis, and scene editing compared to state-of-the-art baselines.