A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
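To make the architecture concrete, the sketch below shows a minimal autoregressive conditional decoder whose latent vector plays the role of the action space described above. It is an illustrative assumption, not the authors' implementation: the layer sizes, the names `MotionVAEDecoder`, `POSE_DIM`, and `LATENT_DIM`, and the use of PyTorch are all placeholders chosen for readability.

```python
# Minimal sketch (not the paper's code) of an autoregressive conditional VAE
# decoder whose latent vector z serves as the action for a learned controller.
# All dimensions and names below are illustrative assumptions.
import torch
import torch.nn as nn

POSE_DIM = 267    # assumed size of a single pose feature vector
LATENT_DIM = 32   # assumed size of the latent "action"

class MotionVAEDecoder(nn.Module):
    """Predicts the next pose from the previous pose and a latent action z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + LATENT_DIM, 256),
            nn.ELU(),
            nn.Linear(256, 256),
            nn.ELU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, prev_pose, z):
        # Condition on the previous pose; z determines how the motion evolves.
        return self.net(torch.cat([prev_pose, z], dim=-1))

# At control time, a policy (e.g. trained with deep RL) would output z each
# frame; rolling the decoder forward autoregressively generates the motion.
decoder = MotionVAEDecoder()
pose = torch.zeros(1, POSE_DIM)          # placeholder initial pose
for _ in range(10):
    z = torch.randn(1, LATENT_DIM)       # stand-in for a policy's action
    pose = decoder(pose, z)              # next pose becomes the new condition
```

In this reading, the encoder is only needed at training time to shape the latent space; afterward the decoder alone defines the environment dynamics that a planner or reinforcement-learning policy acts on through z.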