Generative modeling of human motion has broad applications in computer animation, virtual reality, and robotics. Conventional approaches develop separate models for different motion synthesis tasks, and typically use small models to avoid overfitting the scarce data available in each setting. It remains an open question whether developing a single unified model is feasible, which may 1) facilitate the acquisition of novel skills by combining skills learned from multiple tasks, and 2) help increase model capacity without overfitting by combining multiple data sources. Unification is challenging because 1) it involves diverse control signals as well as targets of varying granularity, and 2) motion datasets may use different skeletons and default poses. In this paper, we present MoFusion, a framework for unified motion synthesis. MoFusion employs a Transformer backbone to ease the inclusion of diverse control signals via cross attention, and pretrains the backbone as a diffusion model to support multi-granularity synthesis ranging from motion completion of a body part to whole-body motion generation. It uses a learnable adapter to accommodate the differences between the default skeletons used by the pretraining and the fine-tuning data. Empirical results show that pretraining is vital for scaling the model size without overfitting, and demonstrate MoFusion's potential in various tasks, e.g., text-to-motion, motion completion, and zero-shot mixing of multiple control signals. Project page: \url{https://ofa-sys.github.io/MoFusion/}.
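To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' code) of the three pieces named in the abstract: a Transformer denoiser whose cross-attention ingests arbitrary control-signal tokens, a diffusion-style denoising forward pass, and a learnable adapter that maps a fine-tuning dataset's skeleton features into the pretraining skeleton's feature space. All module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch, assuming a PyTorch implementation; names and dimensions are illustrative.
import torch
import torch.nn as nn


class SkeletonAdapter(nn.Module):
    """Learnable map from a target dataset's per-frame pose vector to the
    pretraining pose dimensionality (e.g., different joint counts / default poses)."""
    def __init__(self, src_dim: int, tgt_dim: int):
        super().__init__()
        self.proj = nn.Linear(src_dim, tgt_dim)

    def forward(self, x):  # x: (batch, frames, src_dim)
        return self.proj(x)


class MotionDenoiser(nn.Module):
    """Transformer denoiser: self-attention over motion frames, cross-attention
    over control tokens (e.g., text embeddings or body-part constraints)."""
    def __init__(self, pose_dim=263, d_model=512, n_heads=8, n_layers=4, cond_dim=512):
        super().__init__()
        self.in_proj = nn.Linear(pose_dim, d_model)
        self.time_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        self.cond_proj = nn.Linear(cond_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, pose_dim)

    def forward(self, noisy_motion, t, control_tokens):
        # noisy_motion: (B, T, pose_dim); t: (B,) diffusion step; control_tokens: (B, N, cond_dim)
        h = self.in_proj(noisy_motion) + self.time_emb(t[:, None, None].float())
        cond = self.cond_proj(control_tokens)
        h = self.blocks(tgt=h, memory=cond)   # cross-attention injects the control signal
        return self.out_proj(h)               # predicted clean motion (or noise)


if __name__ == "__main__":
    adapter = SkeletonAdapter(src_dim=151, tgt_dim=263)     # fine-tuning skeleton -> pretraining skeleton
    model = MotionDenoiser()
    motion = adapter(torch.randn(2, 60, 151))               # 60 frames from a new dataset
    noisy = motion + torch.randn_like(motion)               # stand-in for the diffusion forward process
    text_tokens = torch.randn(2, 16, 512)                   # e.g., frozen text-encoder outputs
    out = model(noisy, t=torch.randint(0, 1000, (2,)), control_tokens=text_tokens)
    print(out.shape)                                        # torch.Size([2, 60, 263])
```

Because conditioning enters only through cross-attention memory, different control signals (text, partial-body motion, or their zero-shot mixture) can in principle be swapped or concatenated as token sequences without changing the backbone.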