3D reconstruction from images has wide applications in virtual reality and autonomous driving, where the precision requirements are very high. Ground-breaking research on the neural radiance field (NeRF), which utilizes Multi-Layer Perceptrons (MLPs), has dramatically improved the representation quality of 3D objects. Subsequent studies improved NeRF by incorporating truncated signed distance fields (TSDFs), but they still suffer from blurred surfaces in 3D reconstruction. In this work, we address this surface ambiguity by proposing OmniNeRF, a novel 3D shape representation. It trains a hybrid implicit field that combines an Omni-directional Distance Field (ODF) with a neural radiance field, replacing the volume density in NeRF with omnidirectional distance information. Moreover, we introduce additional supervision on the depth map to further improve reconstruction quality. Experiments show that the proposed method effectively mitigates NeRF's defects at the edges of reconstructed surfaces, providing higher-quality 3D scene reconstruction results.
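To make the described hybrid field more concrete, the following is a minimal PyTorch-style sketch of the idea stated above: an MLP predicts, for each 3D point, distances along a set of sampled directions (an omnidirectional distance field) together with view-dependent color, and training combines a photometric term with the depth-map supervision mentioned in the abstract. All names and hyperparameters here (OmniField, n_dirs, depth_weight, the network sizes) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class OmniField(nn.Module):
    """Sketch of a hybrid ODF + radiance field (assumed architecture, not the paper's)."""
    def __init__(self, n_dirs: int = 64, hidden: int = 256):
        super().__init__()
        # Shared trunk over the 3D position.
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Head 1: distance to the surface along each of n_dirs sampled directions
        # (stands in for NeRF's scalar volume density with omnidirectional information).
        self.dist_head = nn.Linear(hidden, n_dirs)
        # Head 2: view-dependent RGB, conditioned on the viewing direction.
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(xyz)
        dists = self.dist_head(h)                              # (B, n_dirs) per-direction distances
        rgb = self.rgb_head(torch.cat([h, view_dir], dim=-1))  # (B, 3) colors
        return dists, rgb


def training_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, depth_weight: float = 0.1):
    # Photometric term (as in NeRF) plus the additional depth supervision;
    # depth_weight is an assumed hyperparameter.
    photo = torch.mean((pred_rgb - gt_rgb) ** 2)
    depth = torch.mean(torch.abs(pred_depth - gt_depth))
    return photo + depth_weight * depth


if __name__ == "__main__":
    model = OmniField()
    xyz = torch.randn(1024, 3)                                  # sampled 3D points
    view = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    dists, rgb = model(xyz, view)
    print(dists.shape, rgb.shape)  # torch.Size([1024, 64]) torch.Size([1024, 3])
```

In this sketch the per-direction distance head is what distinguishes the hybrid field from a plain NeRF MLP; how those distances are rendered into pixels and depths is left out, since the abstract does not specify the rendering procedure.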