We propose to tackle the multiview photometric stereo problem using an extension of Neural Radiance Fields (NeRFs), conditioned on light source direction. The geometric part of our neural representation predicts surface normal direction, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance distribution function (BRDF), learnt as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learnt components with inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multiview photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
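To make the decomposition described above concrete, the sketch below shows one plausible way such a light-conditioned NeRF could be structured: a density network supplies geometry (with normals taken from the density gradient), a neural BRDF models reflectance at each point, and a shadow network conditioned on light direction attenuates the shading to produce the apparent BRDF. This is a minimal illustration under assumed network shapes, not the paper's implementation; the class and layer names (`ApparentBRDFField`, `density_net`, `brdf_net`, `shadow_net`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def mlp(in_dim, out_dim, hidden=128, layers=3):
    # Simple fully connected block shared by all sub-networks below.
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)


class ApparentBRDFField(nn.Module):
    """Hypothetical decomposition: density/normals + neural BRDF + shadow network."""

    def __init__(self):
        super().__init__()
        self.density_net = mlp(3, 1)               # sigma(x): geometry
        self.brdf_net = mlp(3 + 3 + 3 + 3, 3)      # f(x, n, l, v): RGB reflectance
        self.shadow_net = mlp(3 + 3, 1)            # s(x, l): cast-shadow factor

    def forward(self, x, light_dir, view_dir):
        # Density, with the surface normal taken as the negative
        # normalised gradient of density with respect to position.
        x = x.requires_grad_(True)
        sigma = self.density_net(x)
        grad, = torch.autograd.grad(sigma.sum(), x, create_graph=True)
        normal = -F.normalize(grad, dim=-1)

        # Physically-motivated shading: learnt BRDF times foreshortening,
        # attenuated by a light-direction-conditioned shadow prediction.
        brdf = torch.sigmoid(
            self.brdf_net(torch.cat([x, normal, light_dir, view_dir], dim=-1)))
        foreshortening = torch.clamp(
            (normal * light_dir).sum(-1, keepdim=True), min=0.0)
        shadow = torch.sigmoid(
            self.shadow_net(torch.cat([x, light_dir], dim=-1)))
        radiance = shadow * brdf * foreshortening
        return sigma, radiance
```

In a full pipeline, the per-point `sigma` and `radiance` would be composited along each camera ray with standard NeRF volume-rendering weights; because shading is computed from an explicit normal, light direction, and view direction, the model can extrapolate to lighting and viewing conditions outside the training set.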