Recent progress in stochastic motion prediction, i.e., predicting multiple possible future human motions given a single past pose sequence, has made it possible to produce truly diverse future motions and even to control the motion of some body parts. However, to achieve this, the state-of-the-art method requires learning several mappings for diversity and a dedicated model for controllable motion prediction. In this paper, we introduce a unified deep generative network for both diverse and controllable motion prediction. To this end, we leverage the intuition that realistic human motions consist of smooth sequences of valid poses, and that, given limited data, learning a pose prior is much more tractable than learning a motion prior. We therefore design a generator that predicts the motion of different body parts sequentially, and introduce a normalizing-flow-based pose prior, together with a joint angle loss, to achieve motion realism. Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy. The code is available at https://github.com/wei-mao-2019/gsps
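To illustrate the idea of a normalizing-flow pose prior used as a training loss, the minimal sketch below scores pose vectors with a single affine flow layer: a pose is mapped to a latent variable with a standard Gaussian base density, and the change-of-variables formula adds the Jacobian log-determinant. This is only an assumption-laden toy (the actual model would use a multi-layer flow trained on valid poses; `affine_flow_logprob`, `mu`, and `log_sigma` are hypothetical names, not the paper's API):

```python
import numpy as np

def affine_flow_logprob(x, mu, log_sigma):
    """Log-likelihood of pose vectors x under a single affine flow layer.

    The flow maps x to z = (x - mu) * exp(-log_sigma); the base density is
    a standard Gaussian, and the change-of-variables formula contributes
    the Jacobian log-determinant, -sum(log_sigma). A real pose prior would
    stack several such (or more expressive) invertible layers.
    """
    z = (x - mu) * np.exp(-log_sigma)
    d = x.shape[-1]
    log_base = -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)
    log_det = -np.sum(log_sigma)
    return log_base + log_det

def pose_prior_loss(poses, mu, log_sigma):
    """Negative log-likelihood of predicted poses under the flow prior.

    Minimizing this pushes generated poses toward the region the flow
    assigns high density, encouraging pose validity.
    """
    return -np.mean(affine_flow_logprob(poses, mu, log_sigma))
```

In training, such a term would be added to the prediction objective so that each generated pose stays close to the manifold of valid poses, while the joint angle loss mentioned above constrains limb articulation.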