The neural radiance field (NeRF) has shown promising results in preserving the fine details of objects and scenes. However, unlike mesh-based representations, it remains an open problem to build dense correspondences across different NeRFs of the same category, which is essential in many downstream tasks. The main difficulties of this problem lie in the implicit nature of NeRF and the lack of ground-truth correspondence annotations. In this paper, we show it is possible to bypass these challenges by leveraging the rich semantics and structural priors encapsulated in a pre-trained NeRF-based GAN. Specifically, we exploit such priors from three aspects, namely 1) a dual deformation field that takes latent codes as global structural indicators, 2) a learning objective that regards generator features as geometric-aware local descriptors, and 3) a source of infinite object-specific NeRF samples. Our experiments demonstrate that such priors lead to 3D dense correspondence that is accurate, smooth, and robust. We also show that established dense correspondence across NeRFs can effectively enable many NeRF-based downstream applications such as texture transfer.