Designing realistic digital humans is extremely complex. Most data-driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural-network-based generative models of 3D head and body meshes. By encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement over the state of the art, but also maintain good generation capabilities, with training times comparable to the vanilla implementations of the models.
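To make the core idea concrete, below is a minimal sketch of how a local-eigenprojection alignment loss could look in PyTorch. It is not the authors' exact formulation: the function names (`region_laplacian_eigvecs`, `led_loss`), the use of a uniform graph Laplacian rather than a geometric one, the normalization, and the assumption that the attribute's latent slice `z_attr` has dimension `3k` are all illustrative choices made here for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import torch
import torch.nn.functional as F

def region_laplacian_eigvecs(faces, region_idx, num_verts, k=10):
    """Eigenvectors of the uniform graph Laplacian restricted to one
    attribute region (e.g. the nose vertices of a head template).
    A cotangent Laplacian could be substituted for higher fidelity.
    Assumes the region contains more than k vertices."""
    # Build a symmetric vertex adjacency matrix from the triangle edges.
    e = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    A = sp.coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])),
                      shape=(num_verts, num_verts))
    A = ((A + A.T) > 0).astype(np.float64)
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    # Restrict the Laplacian to the region and take the k smallest eigenpairs.
    Lr = L.tocsr()[region_idx][:, region_idx]
    _, evecs = spla.eigsh(Lr, k=k, which='SM')
    return torch.from_numpy(evecs).float()              # (|R|, k)

def led_loss(z_attr, verts, template, evecs, region_idx):
    """Penalize deviation of an attribute's latent variables from the
    normalized local eigenprojection of its vertex displacements.
    verts: input meshes (B, V, 3); template: mean shape (V, 3)."""
    disp = (verts - template)[:, region_idx, :]         # (B, |R|, 3)
    proj = torch.einsum('bvc,vk->bkc', disp, evecs)     # (B, k, 3)
    target = proj.flatten(1)                            # (B, 3k)
    target = target / (target.norm(dim=1, keepdim=True) + 1e-8)
    return F.mse_loss(z_attr, target.detach())
```

In this sketch the eigenprojection target is computed from the ground-truth input mesh and detached, so the loss only shapes the latent code; the term would be added to the usual VAE or GAN training objective with some weighting, one such term per attribute region.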