We propose a new method for realistic human motion transfer using a generative adversarial network (GAN), which generates a motion video of a target character imitating the actions of a source character while maintaining high authenticity in the generated results. We tackle the problem by decoupling and recombining the posture and appearance information of both the source and target characters. The innovation of our approach lies in using the projection of a reconstructed 3D human model as the condition of the GAN, which better maintains the structural integrity of transfer results across different poses. We further introduce a detail enhancement network that sharpens the transfer results by exploiting the details present in the real source frames. Extensive experiments show that our approach yields better results, both qualitatively and quantitatively, than state-of-the-art methods.