We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks. This is an extremely challenging problem that requires modeling complex light transport, and disentangling HDR lighting from material and geometry with only a partial LDR observation of the scene. We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions. We use physically-based indoor light representations that allow for intuitive editing, and infer both visible and invisible light sources. Our neural rendering framework combines physically-based direct illumination and shadow rendering with deep networks to approximate global illumination. It can capture challenging lighting effects, such as soft shadows, directional lighting, specular materials, and interreflections. Previous single image inverse rendering methods usually entangle scene lighting and geometry and only support applications like object insertion. Instead, by combining parametric 3D lighting estimation with neural scene rendering, we demonstrate the first automatic method to achieve full scene relighting, including light source insertion, removal, and replacement, from a single image. All source code and data will be publicly released.
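To make the two-stage design described above concrete, here is a minimal, hypothetical sketch of how such a pipeline could be organized in PyTorch. All names (`SceneReconstruction`, `NeuralRenderer`, `ParametricLight`, `direct_shading`, `gi_net`) are illustrative assumptions, not the authors' released code, and the physically-based direct-lighting term is reduced to a trivial falloff stand-in.

```python
# Hypothetical sketch of the two-stage pipeline: (1) holistic scene
# reconstruction, (2) neural re-rendering. Names are placeholders, not
# the paper's actual API.
import torch
import torch.nn as nn

class ParametricLight:
    """An editable, physically-based 3D light source. Editing the scene
    lighting amounts to changing these parameters and re-rendering."""
    def __init__(self, position, direction, intensity):
        self.position = position      # (3,) world-space location
        self.direction = direction    # (3,) main emission direction
        self.intensity = intensity    # (3,) HDR RGB radiance

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class SceneReconstruction(nn.Module):
    """Stage 1: estimate per-pixel reflectance (only albedo here, for
    brevity) from the LDR image, predicted depth, and light-source
    masks. The full method also infers parametric 3D lights, both
    visible and invisible."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(5, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, image, depth, light_mask):
        x = torch.cat([image, depth, light_mask], dim=1)  # (B,5,H,W)
        return torch.sigmoid(self.net(x))                 # albedo in [0,1]

class NeuralRenderer(nn.Module):
    """Stage 2: physically-based direct illumination plus a network
    that approximates global illumination (interreflections)."""
    def __init__(self):
        super().__init__()
        self.gi_net = nn.Sequential(conv_block(7, 32),
                                    nn.Conv2d(32, 3, 3, padding=1))

    def direct_shading(self, depth, light):
        # Stand-in for analytic direct lighting and shadow rendering;
        # a real implementation would evaluate the parametric light
        # (with visibility) at each reconstructed surface point.
        return light.intensity.view(1, 3, 1, 1) / (depth + 1.0) ** 2

    def forward(self, albedo, depth, light):
        direct = albedo * self.direct_shading(depth, light)
        gi = self.gi_net(torch.cat([albedo, direct, depth], dim=1))
        return direct + gi  # direct term + learned indirect term
```

A usage sketch, again under the same assumptions: relighting keeps the stage-1 reflectance and geometry fixed and only re-runs stage 2 with edited lights, which is what makes light insertion, removal, and replacement cheap at edit time.

```python
recon, renderer = SceneReconstruction(), NeuralRenderer()
img = torch.rand(1, 3, 240, 320)
depth, mask = torch.rand(1, 1, 240, 320), torch.rand(1, 1, 240, 320)
albedo = recon(img, depth, mask)                       # stage 1 (run once)
lamp = ParametricLight(torch.zeros(3),
                       torch.tensor([0., 0., -1.]),
                       torch.ones(3) * 5.0)
relit = renderer(albedo, depth, lamp)                  # edit lamp, re-render
```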