We present 3DHumanGAN, a 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearance under varying view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of the human body, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it lets us harness the power of 2D GANs to generate photo-realistic images; ii) it produces consistent images under varying view angles and specifiable poses; iii) the model benefits from the 3D human prior. Our model is learned adversarially from a collection of web images without the need for manual annotation.
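The abstract's central design, a 2D convolutional backbone whose features are modulated by a 3D pose mapping network, can be sketched at a high level. The snippet below is a minimal, hypothetical illustration in pure Python: a tiny MLP stands in for the implicit pose mapping function, producing per-pixel scale and shift parameters (FiLM-style modulation) from rendered 3D surface points and a pose code. All names, weights, and dimensions here are illustrative assumptions, not the paper's actual architecture.

```python
import random

random.seed(0)

def tiny_mlp(x, w1, w2):
    """Two-layer MLP with ReLU; x is a list of floats.
    Stands in for the paper's implicit pose mapping function."""
    h = [max(0.0, sum(xi * wij for xi, wij in zip(x, row))) for row in w1]
    return [sum(hi * wij for hi, wij in zip(h, row)) for row in w2]

def pose_mapping(point_3d, pose_code, w1, w2):
    """Map a 3D surface point plus a pose code to (scale, shift)."""
    out = tiny_mlp(point_3d + pose_code, w1, w2)
    return out[0], out[1]

def modulate(feature_map, points_3d, pose_code, w1, w2):
    """Apply per-pixel scale/shift from the 3D mapping network
    to the 2D backbone's features (FiLM-style modulation)."""
    out = []
    for f, p in zip(feature_map, points_3d):
        s, b = pose_mapping(p, pose_code, w1, w2)
        out.append(f * (1.0 + s) + b)
    return out

# Hypothetical tiny example: 4 "pixels", each paired with a 3D point
# obtained by rendering the posed human mesh into screen space.
dim_in, dim_h = 3 + 2, 4  # 3D point + 2-dim pose code
w1 = [[random.uniform(-1, 1) for _ in range(dim_in)] for _ in range(dim_h)]
w2 = [[random.uniform(-1, 1) for _ in range(dim_h)] for _ in range(2)]

features = [0.5, -0.2, 1.0, 0.3]           # one 2D backbone feature per pixel
points = [[0.1, 0.2, 0.3], [0.0, -0.1, 0.5],
          [0.4, 0.4, 0.0], [-0.2, 0.1, 0.2]]
pose = [0.7, -0.3]

modulated = modulate(features, points, pose, w1, w2)
print(len(modulated))  # one modulated value per pixel
```

Because the modulation parameters depend only on the rendered 3D points and the pose, the same surface point yields the same modulation under any view angle, which is one intuition for the view consistency the abstract claims.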