3D-controllable portrait synthesis has advanced significantly, thanks to breakthroughs in generative adversarial networks (GANs). However, it remains challenging to manipulate existing face images with precise 3D control. While concatenating GAN inversion with a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality. To fill this gap, we propose 3D-FM GAN, a novel conditional GAN framework designed specifically for 3D-controllable face manipulation that does not require any tuning after the end-to-end learning phase. By carefully encoding both the input face image and a physically-based rendering of the 3D edits into a StyleGAN's latent spaces, our image generator provides high-quality, identity-preserved, 3D-controllable face manipulation. To effectively learn such a novel framework, we develop two essential training strategies and a novel multiplicative co-modulation architecture that improves significantly upon naive schemes. With extensive evaluations, we show that our method outperforms prior art on a variety of tasks, with better editability, stronger identity preservation, and higher photo-realism. In addition, we demonstrate better generalizability of our design on large pose editing and out-of-domain images.
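To make the co-modulation idea concrete, the sketch below illustrates one plausible reading of "multiplicative co-modulation": a latent code derived from the input photo and one derived from the physically-based render are fused by elementwise multiplication before producing the per-layer style used to modulate a StyleGAN convolution. All module and variable names (e.g. photo_mlp, render_mlp) are hypothetical; the exact fusion and layer placement follow the paper, not this illustration.

```python
import torch
import torch.nn as nn

class MultiplicativeCoModulation(nn.Module):
    """Hedged sketch of multiplicative co-modulation for one StyleGAN layer.

    A photo-derived code (identity) and a render-derived code (3D edit) are
    mapped into the same W-like space, fused by elementwise multiplication,
    and then projected to per-channel modulation scales.
    """

    def __init__(self, w_dim: int = 512, channels: int = 256):
        super().__init__()
        self.photo_mlp = nn.Linear(w_dim, w_dim)   # assumed mapping for the photo embedding
        self.render_mlp = nn.Linear(w_dim, w_dim)  # assumed mapping for the render embedding
        self.affine = nn.Linear(w_dim, channels)   # per-channel style for one conv layer

    def forward(self, photo_code: torch.Tensor, render_code: torch.Tensor) -> torch.Tensor:
        w_photo = self.photo_mlp(photo_code)
        w_render = self.render_mlp(render_code)
        w_fused = w_photo * w_render               # multiplicative fusion of the two streams
        return self.affine(w_fused)                # modulation scales for the conv weights

# Toy usage: random embeddings stand in for the two encoder outputs.
photo_code = torch.randn(4, 512)
render_code = torch.randn(4, 512)
style = MultiplicativeCoModulation()(photo_code, render_code)
print(style.shape)  # torch.Size([4, 256])
```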