Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. Our method first employs multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at https://ywq.github.io/psnerf.