Animating virtual avatars to make co-speech gestures facilitates various applications in human-machine interaction. The existing methods mainly rely on generative adversarial networks (GANs), which typically suffer from notorious mode collapse and unstable training, thus making it difficult to learn accurate audio-gesture joint distributions. In this work, we propose a novel diffusion-based framework, named Diffusion Co-Speech Gesture (DiffGesture), to effectively capture the cross-modal audio-to-gesture associations and preserve temporal coherence for high-fidelity audio-driven co-speech gesture generation. Specifically, we first establish the diffusion-conditional generation process on clips of skeleton sequences and audio to enable the whole framework. Then, a novel Diffusion Audio-Gesture Transformer is devised to better attend to the information from multiple modalities and model the long-term temporal dependency. Moreover, to eliminate temporal inconsistency, we propose an effective Diffusion Gesture Stabilizer with an annealed noise sampling strategy. Benefiting from the architectural advantages of diffusion models, we further incorporate implicit classifier-free guidance to trade off between diversity and gesture quality. Extensive experiments demonstrate that DiffGesture achieves state-of-the-art performance, rendering coherent gestures with better mode coverage and stronger audio correlations. Code is available at https://github.com/Advocate99/DiffGesture.
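As a point of reference for the implicit classifier-free guidance mentioned above, the standard formulation (a sketch of the generic technique, not necessarily DiffGesture's exact parameterization, which is detailed in the paper body) randomly drops the audio condition c during training and, at sampling time, combines the conditional and unconditional noise predictions as

\[
\hat{\epsilon}_\theta(x_t, c, t) = \epsilon_\theta(x_t, \varnothing, t) + s \left( \epsilon_\theta(x_t, c, t) - \epsilon_\theta(x_t, \varnothing, t) \right),
\]

where x_t denotes the noisy gesture sequence at diffusion step t, \varnothing denotes the null (dropped) condition, and the guidance scale s controls the trade-off between sample diversity (smaller s) and fidelity to the audio condition (larger s).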