Given a set of images of a scene, re-rendering that scene from novel views and lighting conditions is an important and challenging problem in Computer Vision and Graphics. On the one hand, most existing works in Computer Vision impose strong assumptions on the image formation process, e.g. direct illumination and predefined materials, to make scene parameter estimation tractable. On the other hand, mature Computer Graphics tools allow modeling of complex, photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects under novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single, unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer into the training process and propose a combination of a synthesized OLAT (one-light-at-a-time) loss and a real-image loss. Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art, and thus our re-rendering results are also more realistic and accurate.
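The core idea behind precomputed radiance transfer can be illustrated with the classic linear formulation: per-point transfer vectors, precomputed offline, are dotted with the spherical-harmonics projection of a novel environment map to obtain relit radiance. This is a minimal sketch of that standard PRT setup, not the paper's neural method (which replaces the linear transfer with a learned function); all array shapes and values here are illustrative assumptions.

```python
import numpy as np

# Classic diffuse PRT relighting sketch (illustrative, not the paper's method).
# 3 spherical-harmonics bands (l = 0..2) -> (l_max + 1)^2 = 9 coefficients.
n_coeffs = 9

rng = np.random.default_rng(0)

# Per-point transfer vector: bakes in visibility, BRDF, and interreflections,
# typically precomputed offline with a path tracer (placeholder values here).
transfer = rng.uniform(0.0, 0.1, size=n_coeffs)

# Lighting vector: SH projection of a novel environment map (placeholder values).
light_sh = rng.uniform(0.0, 1.0, size=n_coeffs)

# Relit outgoing radiance at this point is a single dot product, so relighting
# costs O(n_coeffs) per point regardless of the light-transport complexity
# baked into the transfer vector.
radiance = float(transfer @ light_sh)
print(f"relit radiance: {radiance:.4f}")
```

Swapping the environment map only changes `light_sh`, which is what makes relighting under novel illumination cheap once the transfer is precomputed; the paper's contribution is learning a neural version of this transfer from real images.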