Generating realistic motions for digital humans is time-consuming in many graphics applications. Data-driven motion synthesis has made solid progress in recent years through deep generative models; these methods produce high-quality motions but typically lack motion style diversity. We propose, to our knowledge for the first time, a framework that uses the denoising diffusion probabilistic model (DDPM) to synthesize stylized human motions, integrating two tasks into a single pipeline with greater style diversity than traditional motion synthesis methods. Experimental results show that our system generates high-quality and diverse walking motions.
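To make the DDPM component concrete, the following is a minimal sketch of the standard forward (noising) process and one reverse (denoising) step, applied to a pose vector. The noise schedule, the 66-dimensional pose layout, and the `eps_pred` input (which a trained network would supply) are illustrative assumptions, not the paper's actual architecture or conditioning.

```python
import numpy as np

# Illustrative DDPM sketch: linear noise schedule over T timesteps.
# (Assumed values; the paper's actual schedule and model are not shown.)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative products, strictly decreasing

def q_sample(x0, t, eps):
    """Forward process: noise a clean pose vector x0 to timestep t."""
    a = alpha_bars[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

def p_step(x_t, t, eps_pred, rng):
    """One reverse step, given predicted noise eps_pred (from a network)."""
    a_t, ab_t = alphas[t], alpha_bars[t]
    mean = (x_t - betas[t] / np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(a_t)
    if t == 0:
        return mean  # final step adds no noise
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(66)   # hypothetical pose: 22 joints x 3 dof
eps = rng.standard_normal(66)
x_t = q_sample(x0, 500, eps)   # heavily noised pose at t = 500
x_prev = p_step(x_t, 500, eps, rng)  # one denoising step back toward x0
```

Iterating `p_step` from `t = T - 1` down to `0`, with a network predicting the noise at each step, recovers a clean (here, stylized) motion sample from pure Gaussian noise.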