We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images. To model scene illumination, existing inverse rendering works either ignore indirect illumination entirely or model it with coarse approximations, leading to sub-optimal predictions of the scene's illumination, geometry, and materials. In this work, we propose a physics-based illumination model that explicitly traces the incoming indirect lights at each surface point based on interreflection, and then estimates each identified indirect light with an efficient neural network. Furthermore, we utilize the Leibniz integral rule to resolve the non-differentiability in the proposed illumination model caused by one type of environment light, the tangent lights. As a result, the proposed interreflection-aware illumination model can be learned end-to-end together with geometry and material estimation. As a by-product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed method performs favorably against existing inverse rendering methods on novel view synthesis and inverse rendering.
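To make the idea of explicitly tracing indirect lights concrete, the following is a minimal Monte Carlo sketch, not the paper's method: at a surface point, each sampled incoming direction is classified as either direct environment light or an indirect interreflection from nearby geometry (here a single diffuse sphere, with the environment map, albedo, and scene layout all hypothetical placeholders).

```python
import numpy as np

# Toy illustration (our assumption, not the paper's implementation):
# a single diffuse sphere acts as the interreflecting geometry.
rng = np.random.default_rng(0)
SPHERE_CENTER = np.array([0.0, 0.0, 1.5])
SPHERE_RADIUS = 0.5
ALBEDO = 0.6  # hypothetical diffuse reflectance of the sphere


def env_radiance(d):
    """Hypothetical analytic environment map, brighter toward +z."""
    return 0.1 + max(d[2], 0.0)


def hit_sphere(origin, d):
    """Return the ray parameter t of the nearest sphere hit, or None."""
    oc = origin - SPHERE_CENTER
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None


def incident_radiance(origin, d):
    """One-bounce estimate: indirect if the ray hits geometry, else direct."""
    t = hit_sphere(origin, d)
    if t is None:
        return env_radiance(d)  # direct environment light
    hit = origin + t * d
    n = (hit - SPHERE_CENTER) / SPHERE_RADIUS
    # One secondary sample: diffuse bounce of environment light off the sphere.
    s = rng.normal(size=3)
    s /= np.linalg.norm(s)
    if np.dot(s, n) < 0:
        s = -s  # keep the sample in the hemisphere around n
    return ALBEDO * env_radiance(s) * np.dot(s, n)  # indirect interreflection


def irradiance(point, normal, n_samples=2000):
    """Uniform-hemisphere Monte Carlo estimate of incident irradiance."""
    total = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:
            d = -d
        total += incident_radiance(point, d) * np.dot(d, normal)
    return total * 2 * np.pi / n_samples  # uniform hemisphere pdf = 1/(2*pi)


E = irradiance(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(round(E, 3))
```

The paper's contribution is precisely to replace the explicit secondary bounce above with an efficient neural estimate per identified indirect light, so that the whole model stays differentiable and trainable end-to-end.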