Photorealistic object appearance modeling from 2D images is a long-standing topic in vision and graphics. While neural implicit methods such as Neural Radiance Fields have shown high-fidelity view-synthesis results, they cannot relight the captured objects. More recent neural inverse rendering approaches enable object relighting, but they represent surface properties with simple BRDFs and therefore cannot handle translucent objects. We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from images alone. OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects. Accurately modeling subsurface light transport for translucent objects can be highly complex and even intractable for neural methods; instead, OSFs learn to approximate the radiance transfer from a distant light to an outgoing direction at any spatial location. This approximation avoids explicitly modeling complex subsurface scattering, making learning a neural implicit model tractable. Experiments on real and synthetic data show that OSFs accurately reconstruct appearance for both opaque and translucent objects, enabling faithful free-viewpoint relighting as well as scene composition.
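To make the approximated radiance transfer concrete, below is a minimal sketch of the kind of neural field such a formulation suggests: a coordinate MLP queried at a spatial location with an incoming (distant-light) direction and an outgoing direction, predicting a volume density and a radiance-transfer value. The class and function names (`OSFMLP`, `positional_encoding`), layer sizes, and output parameterization are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map inputs to sin/cos features at increasing frequencies (NeRF-style)."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class OSFMLP(nn.Module):
    """Hypothetical OSF-style field: maps (location, incoming light direction,
    outgoing direction) to a volume density and an RGB radiance-transfer value."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        in_dim = 3 * (1 + 2 * num_freqs) * 3   # x, w_in, w_out, each encoded
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                # 1 density + 3 transfer channels
        )

    def forward(self, x, w_in, w_out):
        h = torch.cat([positional_encoding(x),
                       positional_encoding(w_in),
                       positional_encoding(w_out)], dim=-1)
        out = self.mlp(h)
        sigma = torch.relu(out[..., :1])          # non-negative density
        transfer = torch.sigmoid(out[..., 1:])    # fraction of light scattered out
        return sigma, transfer

# Usage: query sampled points along a camera ray for one distant light direction.
pts = torch.rand(1024, 3)                                  # sample locations
w_in = torch.tensor([[0.0, 0.0, 1.0]]).expand(1024, 3)     # distant light direction
w_out = torch.tensor([[0.0, 1.0, 0.0]]).expand(1024, 3)    # outgoing (view) direction
sigma, transfer = OSFMLP()(pts, w_in, w_out)
```

Because the network outputs cumulative radiance transfer at each queried point rather than simulating individual subsurface bounces, the same volume-rendering machinery used for opaque objects can, under this assumption, also handle translucent ones.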