3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRFs) from a collection of monocular 2D images, even for topology-varying object categories. However, these methods still lack the capability to control the shape and appearance of the objects in the generated radiance fields separately. In this paper, we propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations. Our method generates deformable radiance fields, which build dense correspondences between the density fields of different objects and encode their appearances in a shared template field. Our disentanglement is achieved in an unsupervised manner, requiring no labels beyond those used in previous 3D-aware GAN training. We also develop an effective image inversion scheme for reconstructing the radiance field of an object in a real monocular image and manipulating its shape and appearance. Experiments show that our method can successfully learn the generative model from unstructured monocular images and disentangle shape and appearance well, even for object categories (e.g., chairs) with large topological variations. The model trained on synthetic data can faithfully reconstruct a real object from a single given image and achieve high-quality texture and shape editing results.
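To make the core idea concrete, the following is a minimal sketch of a deformable radiance field: a deformation network, conditioned on a shape code, warps sample points into a shared template space, where a template field conditioned on an appearance code predicts density and color. All names, the code dimensions, and the plain-MLP architecture (no positional encoding) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Sequential):
    """A small fully connected network with ReLU activations (helper, assumed)."""
    def __init__(self, in_dim, hidden, out_dim, depth=4):
        layers, d = [], in_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        super().__init__(*layers)

class DeformableRadianceField(nn.Module):
    """Sketch of the disentangled design: shape controls the deformation
    into the template space; appearance controls the shared template field."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        # deformation field: (point, shape code) -> offset into template space
        self.deform = MLP(3 + code_dim, hidden, 3)
        # shared template field: (template point, appearance code) -> (sigma, rgb)
        self.template = MLP(3 + code_dim, hidden, 4)

    def forward(self, x, shape_code, app_code):
        # x: (N, 3) sample points along camera rays
        offset = self.deform(torch.cat([x, shape_code.expand(x.shape[0], -1)], dim=-1))
        x_template = x + offset  # dense correspondence to the template
        out = self.template(torch.cat([x_template, app_code.expand(x.shape[0], -1)], dim=-1))
        sigma = torch.relu(out[..., :1])    # non-negative density
        rgb = torch.sigmoid(out[..., 1:])   # colors in [0, 1]
        return sigma, rgb

# usage: points sampled along rays, with per-object shape/appearance codes;
# swapping one code while fixing the other edits shape and texture independently
pts = torch.rand(1024, 3) * 2 - 1
z_shape, z_app = torch.randn(1, 128), torch.randn(1, 128)
sigma, rgb = DeformableRadianceField()(pts, z_shape, z_app)
```

Because every object's density field is expressed as a warp of the same template, points that map to the same template location correspond across objects, which is what allows appearance to be transferred while shape is edited independently.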