Recent neural rendering methods have demonstrated accurate view interpolation by predicting volumetric density and color with a neural network. Although such volumetric representations can be supervised on static and dynamic scenes, existing methods implicitly bake the complete scene light transport into a single neural network for a given scene, including surface modeling, bidirectional scattering distribution functions (BSDFs), and indirect lighting effects. In contrast to traditional rendering pipelines, this prohibits changing surface reflectance or illumination, or composing other objects into the scene. In this work, we explicitly model the light transport between scene surfaces, and we rely on traditional integration schemes and the rendering equation to reconstruct a scene. The proposed method allows BSDF recovery under unknown lighting conditions and supports classic light transport techniques such as path tracing. By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting, and scene composition. The method outperforms NeRV for relighting under known lighting conditions, and produces realistic reconstructions for relit and edited scenes. We validate the proposed approach on scene editing, relighting, and reflectance estimation, learned from synthetic and captured views, on a subset of NeRV's datasets.
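For reference, the rendering equation the abstract alludes to is the standard formulation (transcribed here in our notation, not taken from the source): the outgoing radiance $L_o$ at a surface point $\mathbf{x}$ is the emitted radiance plus incident radiance weighted by the BSDF $f_r$ over the hemisphere $\Omega$ about the surface normal $\mathbf{n}$:

\[
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i
\]

Decomposing a scene into surfaces, BSDFs, and incident illumination that satisfy this equation is what permits the editing and relighting operations described above.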
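To make the "traditional integration schemes" concrete, below is a minimal sketch of a one-bounce Monte Carlo estimator of the rendering equation, of the kind a path tracer evaluates at each surface hit. This is an illustrative example under our own assumptions, not the paper's implementation; the names `sample_hemisphere`, `outgoing_radiance`, `bsdf`, and `incident_radiance` are hypothetical, and in the paper's setting the BSDF and incident radiance would be learned networks rather than the analytic stubs used here.

```python
import numpy as np

def sample_hemisphere(n, rng):
    """Cosine-weighted direction sample about the unit normal n (pdf = cos(theta)/pi)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around n and rotate the local sample into it.
    t = np.cross(n, [0.0, 1.0, 0.0] if abs(n[0]) > 0.5 else [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n

def outgoing_radiance(x, n, w_o, bsdf, incident_radiance, rng, n_samples=64):
    """Monte Carlo estimate of the rendering equation integral at surface point x.

    With cosine-weighted sampling the cosine term cancels against the pdf,
    so each sample contributes pi * f_r * L_i.
    """
    total = np.zeros(3)
    for _ in range(n_samples):
        w_i = sample_hemisphere(n, rng)
        total += np.pi * bsdf(x, w_i, w_o) * incident_radiance(x, w_i)
    return total / n_samples

# Usage: a Lambertian surface under a constant white environment.
# The estimate should converge to the albedo (hypothetical test values).
rng = np.random.default_rng(0)
lambertian = lambda x, wi, wo: np.array([0.8, 0.5, 0.3]) / np.pi  # diffuse albedo / pi
sky = lambda x, wi: np.ones(3)                                    # uniform incident radiance
L = outgoing_radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 1.0]), lambertian, sky, rng)
```

A full path tracer applies this estimator recursively, with `incident_radiance` traced through further bounces; swapping in a learned BSDF here is what enables reflectance edits and relighting without retraining the whole scene representation.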