Implicit neural 3D representations have achieved impressive results in surface and scene reconstruction and novel view synthesis, typically using coordinate-based multi-layer perceptrons (MLPs) to learn a continuous scene representation. However, existing approaches, such as the Neural Radiance Field (NeRF) and its variants, usually require dense input views (i.e., 50-150) to obtain decent results. To relieve the over-dependence on massive calibrated images and enrich the coordinate-based feature representation, we explore injecting prior information into the coordinate-based network and introduce a novel coordinate-based model, CoCo-INR, for implicit neural 3D representation. The core of our method is two attention modules: codebook attention and coordinate attention. The former extracts useful prototypes containing rich geometry and appearance information from a prior codebook, and the latter propagates this prior information to each coordinate, enriching its feature representation of a scene or object surface. With the help of the prior information, our method can render 3D views with more photo-realistic appearances and geometries than current methods when fewer calibrated images are available. Experiments on several scene reconstruction datasets, including DTU and BlendedMVS, as well as the full 3D head reconstruction dataset H3DS, demonstrate the robustness of our method under sparse input views and its capability of preserving fine details.
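The two modules described above can be viewed as a chain of cross-attention operations: codebook attention distills a large prior codebook into a small set of prototypes via learnable query tokens, and coordinate attention lets each embedded 3D coordinate query those prototypes to enrich its feature. The following NumPy sketch illustrates this data flow only; all tensor sizes, variable names, and the single-head dot-product form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # single-head scaled dot-product attention: queries attend to keys/values
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d = 16                                     # feature dimension (illustrative)
codebook = rng.normal(size=(512, d))       # prior codebook (e.g., from a pretrained model)
query_tokens = rng.normal(size=(32, d))    # learnable query tokens (assumed)

# Codebook attention: distill the codebook into a small set of prototypes.
prototypes = cross_attention(query_tokens, codebook, codebook)   # (32, d)

# Coordinate attention: each positional-encoded coordinate queries the prototypes.
coord_feats = rng.normal(size=(1024, d))   # embedded 3D sample coordinates
enriched = cross_attention(coord_feats, prototypes, prototypes)  # (1024, d)
```

In this sketch the enriched per-coordinate features would then feed the downstream MLP heads that predict geometry (e.g., signed distance or density) and appearance.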