We present a neural rendering framework for simultaneous view synthesis and appearance editing of a scene from multi-view images captured under known environment illumination. Existing approaches achieve either view synthesis alone or view synthesis combined with relighting, without direct control over the scene's appearance. Our approach explicitly disentangles appearance and learns a lighting representation that is independent of it. Specifically, we estimate the BRDF independently and use it to learn a lighting-only representation of the scene. This disentanglement allows our approach to generalize to arbitrary appearance changes while performing view synthesis. We show results of editing the appearance of a real scene, demonstrating that our approach produces plausible appearance edits. Our view synthesis performance is on par with state-of-the-art approaches on both real and synthetic data.
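To make the disentanglement idea concrete, here is a minimal sketch, not the paper's actual pipeline: under a purely Lambertian assumption, a pixel's color factors into an appearance term (albedo, i.e. the diffuse BRDF) and a lighting-only term (irradiance). Once the two are separated, appearance editing amounts to swapping the albedo while the lighting term stays fixed. The `render` function, the albedo values, and the irradiance values below are all hypothetical illustrations.

```python
import numpy as np

def render(albedo, irradiance):
    """Diffuse shading: outgoing radiance = (albedo / pi) * irradiance.

    The factorization into an appearance term (albedo) and a
    lighting-only term (irradiance) is what enables editing.
    """
    return (albedo / np.pi) * irradiance

# Lighting-only quantity, assumed recovered once from multi-view
# observations (hypothetical RGB values for illustration).
irradiance = np.array([2.0, 1.5, 1.0])

original_albedo = np.array([0.8, 0.2, 0.2])  # reddish surface
edited_albedo = np.array([0.2, 0.2, 0.8])    # recolored to blue

original = render(original_albedo, irradiance)
# Appearance edit: swap the albedo; the lighting term is untouched.
edited = render(edited_albedo, irradiance)
```

Because the lighting term is learned independently of appearance, the same swap generalizes to arbitrary albedo changes without re-estimating illumination.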