3D-aware GANs based on generative neural radiance fields (GNeRF) have achieved impressive high-quality image generation while preserving strong 3D consistency. The most notable achievements have been made in the face generation domain. However, most of these models focus on improving view consistency and neglect disentanglement, so they cannot provide high-quality semantic/attribute control over generation. To this end, we introduce a conditional GNeRF model that uses specific attribute labels as input to improve the controllability and disentanglement ability of 3D-aware generative models. We take a pre-trained 3D-aware model as the basis and integrate a dual-branch attribute-editing module (DAEM) that uses attribute labels to provide control over generation. Moreover, we propose a TRIOT (TRaining as Init, and Optimizing for Tuning) method that optimizes the latent vector to further improve the precision of attribute editing. Extensive experiments on the widely used FFHQ dataset show that our model yields high-quality editing with better view consistency while preserving the non-target regions. The code is available at https://github.com/zhangqianhui/TT-GNeRF.
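The TRIOT idea described above (use the trained editing module's output as an initialization, then optimize the latent vector at test time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generator` and `attr_predictor` networks below are tiny stand-ins for the GNeRF generator and an attribute classifier, and the loss weights, step counts, and the `triot_edit` helper are all hypothetical.

```python
import torch

# Toy stand-ins (assumptions, not the paper's networks): a "generator"
# mapping a 16-d latent to a 64-d image-like feature, and an "attribute
# predictor" scoring one binary attribute in [0, 1].
torch.manual_seed(0)
generator = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.Tanh())
attr_predictor = torch.nn.Sequential(torch.nn.Linear(64, 1), torch.nn.Sigmoid())

def triot_edit(z_init, target_attr, steps=200, lr=0.05, lam=0.1):
    """TRIOT-style refinement: the trained editing branch supplies z_init
    ("training as init"); gradient descent then tunes the latent toward the
    target attribute ("optimizing for tuning"), with a proximity term that
    keeps the edit close to the init so non-target content is preserved."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)
        attr_loss = (attr_predictor(img) - target_attr).pow(2).mean()
        reg_loss = lam * (z - z_init).pow(2).mean()  # stay near the init
        (attr_loss + reg_loss).backward()
        opt.step()
    return z.detach()

z0 = torch.randn(1, 16)  # init from the (hypothetical) editing branch
z_edit = triot_edit(z0, target_attr=torch.tensor([[1.0]]))
```

The proximity regularizer is the key design choice: without it, free latent optimization would satisfy the attribute target but could drift arbitrarily far from the initialization and alter non-target regions.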