Animating virtual avatars to make co-speech gestures facilitates various applications in human-machine interaction. Existing methods mainly rely on generative adversarial networks (GANs), which typically suffer from notorious mode collapse and unstable training, making it difficult to learn accurate audio-gesture joint distributions. In this work, we propose a novel diffusion-based framework, named Diffusion Co-Speech Gesture (DiffGesture), to effectively capture the cross-modal audio-to-gesture associations and preserve temporal coherence for high-fidelity audio-driven co-speech gesture generation. Specifically, we first establish the diffusion-conditional generation process on clips of skeleton sequences and audio to enable the whole framework. Then, a novel Diffusion Audio-Gesture Transformer is devised to better attend to the information from multiple modalities and model the long-term temporal dependency. Moreover, to eliminate temporal inconsistency, we propose an effective Diffusion Gesture Stabilizer with an annealed noise sampling strategy. Benefiting from the architectural advantages of diffusion models, we further incorporate implicit classifier-free guidance to trade off between diversity and gesture quality. Extensive experiments demonstrate that DiffGesture achieves state-of-the-art performance, rendering coherent gestures with better mode coverage and stronger audio correlations. Code is available at https://github.com/Advocate99/DiffGesture.
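To make the conditional-diffusion idea concrete, the sketch below shows one generic DDPM reverse step with classifier-free guidance applied to an audio-conditioned gesture clip. It is a minimal illustration of the standard recipe, not the released DiffGesture implementation: the `denoiser` network, tensor shapes, and the `null_audio` placeholder for the unconditional branch are all assumptions made for exposition.

```python
# Hypothetical sketch of one DDPM reverse step with implicit classifier-free
# guidance for audio-conditioned gesture generation. Names and shapes are
# illustrative assumptions; `denoiser` stands in for an epsilon-prediction network.
import torch

@torch.no_grad()
def guided_reverse_step(denoiser, x_t, t, audio, null_audio,
                        alphas_cumprod, betas, guidance_scale=1.0):
    """One reverse step p(x_{t-1} | x_t) with classifier-free guidance.

    x_t:        noisy gesture clip, shape (B, T_frames, D_pose)
    audio:      audio condition features, shape (B, T_frames, D_audio)
    null_audio: learned "unconditional" embedding with the same shape as audio
    """
    # Predict noise with and without the audio condition, then extrapolate
    # toward the conditional prediction (classifier-free guidance).
    eps_cond = denoiser(x_t, t, audio)
    eps_uncond = denoiser(x_t, t, null_audio)
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # Standard DDPM posterior mean computed from the guided noise estimate.
    alpha_bar_t = alphas_cumprod[t]
    alpha_t = 1.0 - betas[t]
    mean = (x_t - betas[t] / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)

    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise
```

With `guidance_scale = 1.0` the step reduces to ordinary conditional sampling; larger values push samples toward stronger audio correlation at the cost of diversity, which is the diversity-quality trade-off mentioned above.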