Recent advances in implicit neural representations have demonstrated the ability to recover detailed geometry and materials from multi-view images. However, representing non-distant illumination with simplified lighting models such as environment maps, or fitting indirect illumination with a network that lacks a physical basis, can lead to an undesirable decomposition between lighting and material. To address this, we propose a fully differentiable framework named neural ambient illumination (NeAI) that uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way. Together with an integral lobe encoding that adapts the specular lobe to surface roughness and a pre-convoluted background that enables accurate decomposition, the proposed method represents a significant step towards integrating physically based rendering into the NeRF representation. Experiments demonstrate superior novel-view rendering performance compared to previous works, and the capability to re-render objects under arbitrary NeRF-style environments opens up exciting possibilities for bridging the gap between virtual and real-world scenes. The project page and supplementary materials are available at https://yiyuzhuang.github.io/NeAI/.