Fast generation of high-quality 3D digital humans is important to a vast number of applications, ranging from entertainment to professional use cases. Recent advances in differentiable rendering have enabled the training of 3D generative models without requiring 3D ground truths. However, the quality of the generated 3D humans still has much room for improvement in terms of both fidelity and diversity. In this paper, we present Get3DHuman, a novel 3D human framework that can significantly boost the realism and diversity of the generated outcomes using only a limited budget of 3D ground-truth data. Our key observation is that the 3D generator can profit from human-related priors learned through 2D human generators and 3D reconstructors. Specifically, we bridge the latent space of Get3DHuman with that of StyleGAN-Human via a specially designed prior network, where the input latent code is mapped to the shape and texture feature volumes spanned by the pixel-aligned 3D reconstructor. The outcomes of the prior network are then leveraged as supervisory signals for the main generator network. To ensure effective training, we further propose three tailored losses applied to the generated feature volumes and the intermediate feature maps. Extensive experiments demonstrate that Get3DHuman greatly outperforms other state-of-the-art approaches and can support a wide range of applications including shape interpolation, shape re-texturing, and single-view reconstruction through latent inversion.
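The prior-supervision scheme described above can be illustrated with a minimal, self-contained sketch. All names, dimensions, and the simple L2-style loss below are hypothetical stand-ins chosen for illustration only: the real prior network and main generator are deep networks, and the paper proposes three tailored losses rather than the single one shown here. The sketch only conveys the data flow, i.e. that a shared latent code is mapped by a frozen prior network into shape/texture feature volumes that then serve as supervisory targets for the main generator's volumes.

```python
import numpy as np

rng = np.random.default_rng(42)

LATENT_DIM = 8            # dimensionality of the shared latent code (illustrative)
VOL_SHAPE = (4, 4, 4, 2)  # tiny stand-in for a shape/texture feature volume
VOL_SIZE = int(np.prod(VOL_SHAPE))

# Hypothetical stand-ins for the two networks: fixed random linear maps
# followed by tanh, each turning a latent code into a feature volume.
W_prior = rng.standard_normal((VOL_SIZE, LATENT_DIM))  # "prior network"
W_gen = rng.standard_normal((VOL_SIZE, LATENT_DIM))    # "main generator"

def feature_volume(z, W):
    """Map a latent code z to a (D, H, W, C) feature volume."""
    return np.tanh(W @ z).reshape(VOL_SHAPE)

def volume_supervision_loss(gen_vol, prior_vol):
    """L2-style supervision of the generator's volume by the prior's volume."""
    return float(np.mean((gen_vol - prior_vol) ** 2))

z = rng.standard_normal(LATENT_DIM)     # shared input latent code
prior_vol = feature_volume(z, W_prior)  # pseudo ground truth from the prior network
gen_vol = feature_volume(z, W_gen)      # main generator's prediction
loss = volume_supervision_loss(gen_vol, prior_vol)
```

In training, `loss` (and the paper's two additional losses on intermediate feature maps) would be minimized with respect to the generator's parameters while the prior network stays fixed, so the generator inherits the human-related priors without needing extra 3D ground truth.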