Motion retargeting is a long-standing problem in character animation, which consists in transferring and adapting the motion of a source character to a different target character. A typical application is the creation of motion sequences from off-the-shelf motions by transferring them onto new characters. Motion retargeting also promises to improve the interoperability of existing animation systems and motion databases, which often differ in the structure of the skeleton(s) they consider. Moreover, since the goal of motion retargeting is to abstract and transfer motion dynamics, effective solutions might coincide with expressive and powerful human motion models in which operations such as cleaning or editing are easier. In this article, we present a novel abstract representation of human motion that is agnostic to skeleton topology and morphology. Based on transformers, our model is able to encode and decode motion sequences with variable morphology and topology -- extending the scope of retargeting -- while supporting skeleton topologies not seen during the training phase. More specifically, our model is structured as an autoencoder, and encoding and decoding are separately conditioned on skeleton templates to extract and control morphology and topology. Beyond motion retargeting, our model has many potential applications, since our abstract representation is a convenient space in which to embed motion data from different sources. It may be beneficial to a number of data-driven methods, allowing them to combine scarce specialised motion datasets (e.g. with style or contact annotations) with larger general motion datasets for improved performance and generalisation ability. We also show that our model is useful for applications beyond retargeting, including motion denoising and joint upsampling.
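To make the architecture concrete, the sketch below outlines one plausible way to build a transformer autoencoder whose encoder and decoder are separately conditioned on skeleton templates. This is a minimal illustrative assumption, not the authors' implementation: the class name, dimensions, and the choice to inject the source template as prefix tokens and the target template via cross-attention are all hypothetical.

```python
# Minimal sketch of a skeleton-conditioned transformer autoencoder.
# Illustrative only: names, dimensions, and the conditioning scheme are
# assumptions, not the paper's actual architecture. Positional encodings
# are omitted for brevity.
import torch
import torch.nn as nn

class SkeletonConditionedAutoencoder(nn.Module):
    def __init__(self, motion_dim=64, skel_dim=32, d_model=256,
                 n_heads=8, n_layers=4):
        super().__init__()
        # Project per-frame motion features and per-joint skeleton-template
        # features into a shared model dimension.
        self.motion_in = nn.Linear(motion_dim, d_model)
        self.skel_in = nn.Linear(skel_dim, d_model)
        self.motion_out = nn.Linear(d_model, motion_dim)

        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)

    def encode(self, motion, src_skeleton):
        # motion: (B, T, motion_dim); src_skeleton: (B, J_src, skel_dim).
        # Prefixing skeleton-template tokens conditions the encoder on the
        # source morphology/topology.
        tokens = torch.cat(
            [self.skel_in(src_skeleton), self.motion_in(motion)], dim=1)
        hidden = self.encoder(tokens)
        # Keep only the motion positions as the skeleton-agnostic latent.
        return hidden[:, src_skeleton.shape[1]:]

    def decode(self, latent, tgt_skeleton):
        # Per-frame latent queries cross-attend to the target skeleton
        # tokens, conditioning the decoded motion on the target template.
        hidden = self.decoder(tgt=latent, memory=self.skel_in(tgt_skeleton))
        return self.motion_out(hidden)

    def forward(self, motion, src_skeleton, tgt_skeleton):
        # Retargeting = encode with the source template, decode with the
        # target template.
        return self.decode(self.encode(motion, src_skeleton), tgt_skeleton)

# Usage: retarget a 22-joint clip onto a 28-joint skeleton. The motion
# feature size is fixed (padded) here; a fully topology-aware model would
# size its output per target joint.
model = SkeletonConditionedAutoencoder()
motion = torch.randn(2, 120, 64)  # 2 clips, 120 frames
src = torch.randn(2, 22, 32)      # source skeleton template
tgt = torch.randn(2, 28, 32)      # target skeleton template
out = model(motion, src, tgt)     # (2, 120, 64)
```

Conditioning the encoder and decoder separately, as in this sketch, is what would make the latent sequence a candidate shared space for heterogeneous data: clips recorded on different skeletons map into comparable latent representations.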