This paper proposes a practical photometric solution to the challenging problem of in-the-wild inverse rendering under unknown ambient lighting. Our system recovers scene geometry and reflectance using only multi-view images captured with a smartphone. The key idea is to exploit the smartphone's built-in flashlight as a minimally controlled light source and to decompose image intensities into two photometric components: a static appearance corresponding to the ambient flux, and a dynamic reflection induced by the moving flashlight. Our method does not require flash/non-flash images to be captured in pairs. Building on the success of neural light fields, we use an off-the-shelf method to capture the ambient reflections, while the flashlight component provides physically accurate photometric constraints that decouple reflectance and illumination. Compared with existing inverse rendering methods, our setup is applicable to non-darkroom environments yet sidesteps the inherent difficulty of explicitly solving for ambient reflections. Extensive experiments demonstrate that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques. Finally, our neural reconstruction can be easily exported as a PBR-textured triangle mesh ready for industrial renderers.
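To illustrate the two-component decomposition described above, one possible image formation model separates a static ambient term from a near-light flash term; this is a sketch under our own assumptions, and the symbols $I_{\mathrm{amb}}$, $\Phi$, $d$, $\theta_i$, and $f_r$ are illustrative rather than the paper's notation:
\[
  I(\mathbf{x}, \omega_o)
  \;=\;
  \underbrace{I_{\mathrm{amb}}(\mathbf{x}, \omega_o)}_{\text{static ambient appearance}}
  \;+\;
  \underbrace{\frac{\Phi}{d(\mathbf{x})^{2}}\,
    f_r(\mathbf{x}, \omega_i, \omega_o)\,\cos\theta_i}_{\text{dynamic flashlight reflection}},
\]
where $\Phi$ denotes the flashlight intensity, $d(\mathbf{x})$ the distance from the flash to the surface point, $f_r$ the BRDF, and $\theta_i$ the incidence angle of the flash direction. Because the flash moves with the camera, only the second term varies across views, which is what supplies the photometric constraints used to decouple reflectance and illumination.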