Neural Radiance Fields (NeRF) constitute a remarkable breakthrough in image-based 3D reconstruction. However, their implicit volumetric representations differ significantly from the widely adopted polygonal meshes and lack support from common 3D software and hardware, making their rendering and manipulation inefficient. To overcome this limitation, we present a novel framework that generates textured surface meshes from images. Our approach begins by efficiently initializing the geometry and view-dependency decomposed appearance with a NeRF. Subsequently, a coarse mesh is extracted, and an iterative surface refinement algorithm is developed to adaptively adjust both vertex positions and face density based on re-projected rendering errors. We jointly refine the appearance with the geometry and bake it into texture images for real-time rendering. Extensive experiments demonstrate that our method achieves superior mesh quality and competitive rendering quality.
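The adaptive face-density adjustment mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the general idea under assumed error thresholds: faces accumulating high re-projected rendering error are flagged for subdivision, while low-error faces are flagged for decimation. The function name and threshold values are hypothetical.

```python
import numpy as np

def classify_faces(face_errors, subdivide_thresh=0.1, decimate_thresh=0.01):
    """Flag mesh faces for subdivision or decimation from per-face
    re-projected rendering errors (thresholds are illustrative)."""
    face_errors = np.asarray(face_errors, dtype=float)
    subdivide = face_errors > subdivide_thresh   # high error: add detail
    decimate = face_errors < decimate_thresh     # low error: simplify
    return subdivide, decimate

# Toy example: four faces with accumulated rendering errors.
errors = [0.2, 0.05, 0.005, 0.3]
sub, dec = classify_faces(errors)
print(sub.tolist())  # [True, False, False, True]
print(dec.tolist())  # [False, False, True, False]
```

In a full pipeline, the flagged faces would be passed to standard mesh subdivision and decimation routines between refinement iterations, so that face density concentrates where the rendered mesh still disagrees with the input images.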