Decomposing a scene into its shape, reflectance, and illumination is a challenging but important problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, most of these techniques only enable view synthesis and not relighting. Additionally, evaluating these radiance fields is resource- and time-intensive. We propose a neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties. In contrast to existing techniques, our input images can be captured under different illumination conditions. In addition, we propose techniques to convert the learned reflectance volume into a relightable textured mesh, enabling fast real-time rendering under novel illumination. We demonstrate the potential of the proposed approach with experiments on both synthetic and real datasets, where we are able to obtain high-quality relightable 3D assets from image collections. The datasets and code are available on the project page: https://markboss.me/publication/2021-nerd/
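To make the notion of "spatially varying BRDF material properties" concrete, the following is a minimal sketch of evaluating an analytic microfacet BRDF at a single surface point, here a simplified Cook-Torrance/GGX model parameterized by base color, metallic, and roughness. The specific model, function name, and parameter values are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

def ggx_brdf(n, v, l, base_color, metallic, roughness):
    """Evaluate a simplified Cook-Torrance GGX BRDF (illustrative sketch).

    n, v, l: unit-length normal, view, and light directions (3-vectors).
    base_color: RGB albedo in [0, 1] as a NumPy array.
    metallic, roughness: scalar material parameters in [0, 1].
    Returns the RGB BRDF value (not yet multiplied by the cosine term).
    """
    h = v + l
    h = h / np.linalg.norm(h)                       # half vector
    n_dot_l = max(np.dot(n, l), 1e-6)
    n_dot_v = max(np.dot(n, v), 1e-6)
    n_dot_h = max(np.dot(n, h), 0.0)
    v_dot_h = max(np.dot(v, h), 0.0)

    a = roughness ** 2
    # GGX normal distribution term D
    d = a ** 2 / (np.pi * (n_dot_h ** 2 * (a ** 2 - 1.0) + 1.0) ** 2)
    # Schlick Fresnel term F; dielectrics reflect ~4%, metals tint by base color
    f0 = 0.04 * (1.0 - metallic) + base_color * metallic
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Smith geometry term G (Schlick-GGX approximation)
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_v / (n_dot_v * (1.0 - k) + k)) * \
        (n_dot_l / (n_dot_l * (1.0 - k) + k))

    specular = d * f * g / (4.0 * n_dot_v * n_dot_l)
    diffuse = (1.0 - metallic) * base_color / np.pi  # Lambertian diffuse lobe
    return diffuse + specular

# Example: shade a point lit from 45 degrees above, viewed head-on
n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.70710678, 0.70710678])
rgb = ggx_brdf(n, v, l, base_color=np.array([0.8, 0.2, 0.2]),
               metallic=0.0, roughness=0.4) * max(np.dot(n, l), 0.0)
```

In a decomposition method of this kind, a network predicts per-point parameters such as `base_color`, `metallic`, and `roughness`, and a differentiable physically-based renderer like the sketch above connects those parameters to the observed pixel colors; once recovered, the same parameters can be baked into textures on a mesh for real-time relighting.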