Learning a neural radiance field of a scene has recently enabled realistic novel view synthesis, but such representations can only synthesize images under the original, fixed lighting condition. They are therefore inflexible for highly desired tasks such as relighting, scene editing, and scene composition. To tackle this problem, several recent methods propose to disentangle reflectance and illumination from the radiance field. These methods can handle solid objects with opaque surfaces, but they neglect participating media. Moreover, they account only for direct illumination, or at most one-bounce indirect illumination, and thus suffer from energy loss caused by ignoring high-order indirect illumination. We propose to learn neural representations for participating media with a complete simulation of global illumination. We estimate direct illumination via ray tracing and compute indirect illumination with spherical harmonics. Our approach avoids computing the lengthy indirect bounces and does not suffer from energy loss. Experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and that it generalizes to solid objects with opaque surfaces as well.
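The idea of encoding smooth indirect illumination with spherical harmonics can be illustrated with a minimal sketch. The basis constants below are the standard real spherical harmonics up to order 2 (9 coefficients); the function names are illustrative and not taken from the paper's implementation.

```python
# Minimal sketch: representing a low-frequency (indirect) radiance signal
# with real spherical harmonics up to order 2 (9 coefficients).
# Constants are the standard real SH normalization factors.

def sh_basis(d):
    """Evaluate the 9 real SH basis functions at unit direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,                        # Y_0^0
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ]

def eval_radiance(coeffs, d):
    """Reconstruct radiance in direction d from 9 SH coefficients."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

Storing such a small coefficient vector per spatial location lets a renderer query smooth indirect lighting in any direction with a single dot product, instead of tracing additional bounces, which is the efficiency the abstract alludes to.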