We present a modern solution to the multi-view photometric stereo (MVPS) problem. Our work suitably exploits the image formation model in an MVPS experimental setup to recover the dense 3D reconstruction of an object from images. We obtain the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry. In contrast to previous multi-staged frameworks for MVPS, where position, iso-depth contours, or orientation measurements are estimated independently and fused later, our method is simple to implement and realize. Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network. We render the MVPS images by considering the object's surface normals for each 3D sample point along the viewing direction, rather than explicitly using the density gradient in the volume space via 3D occupancy information. We optimize the proposed neural radiance field representation for the MVPS setup efficiently using a fully connected deep network to recover the 3D geometry of an object. Extensive evaluation on the DiLiGenT-MV benchmark dataset shows that our method performs better than approaches that perform only PS or only multi-view stereo (MVS), and provides comparable results against state-of-the-art multi-stage fusion methods.
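Since the abstract describes the architecture only at a high level, the sketch below is a minimal, illustrative rendering of the core idea in PyTorch: a fully connected radiance-field network that, for each 3D sample point along a viewing ray, consumes the surface normal predicted by a deep PS network (instead of deriving orientation from the density gradient) and outputs density and colour, which are composited by standard volume rendering. The class name `NormalConditionedNeRF`, the layer widths, and the encoding frequencies are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: a normal-conditioned radiance field in the spirit of
# the abstract above. All names, widths, and frequency counts are assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, n_freqs=10):
    """Standard NeRF-style sinusoidal encoding applied to 3D sample points."""
    freqs = (2.0 ** torch.arange(n_freqs, device=x.device)) * torch.pi
    enc = [x]
    for f in freqs:
        enc.append(torch.sin(f * x))
        enc.append(torch.cos(f * x))
    return torch.cat(enc, dim=-1)


class NormalConditionedNeRF(nn.Module):
    """Fully connected net: (encoded 3D point, PS surface normal) -> (density, colour)."""

    def __init__(self, n_freqs=10, hidden=256):
        super().__init__()
        self.n_freqs = n_freqs
        in_pts = 3 + 3 * 2 * n_freqs  # raw xyz plus sin/cos encodings
        self.trunk = nn.Sequential(
            nn.Linear(in_pts, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)           # volume density
        self.rgb_head = nn.Sequential(                   # colour conditioned on the PS normal
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, pts, normals):
        # pts: (N, 3) sample points along rays; normals: (N, 3) from a pretrained PS network
        h = self.trunk(positional_encoding(pts, self.n_freqs))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, normals], dim=-1))
        return sigma, rgb


def render_ray(model, pts, normals, deltas):
    """Composite per-sample (sigma, rgb) along a single ray with the usual NeRF quadrature."""
    sigma, rgb = model(pts, normals)                         # (N, 1), (N, 3)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)     # per-sample opacity
    ones = torch.ones(1, device=alpha.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                  # contribution of each sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)          # rendered pixel colour
```

In such a setup, training would sample rays from the calibrated multi-view images, query the photometric stereo network for the normal at each sample point, render the pixel colour as above, and minimize a photometric loss against the observed pixels; the optimized field is then used to extract the object's 3D geometry.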