Human motion prediction is a necessary component of many applications in robotics and autonomous driving. Recent methods tackle this problem with sequence-to-sequence deep learning models, but they do not exploit different temporal scales for inputs of different lengths. We argue that diverse temporal scales are important because they allow the past frames to be viewed with different receptive fields, which can lead to better predictions. In this paper, we propose a Temporal Inception Module (TIM) to encode human motion. Using TIM, our framework produces input embeddings with convolutional layers, applying different kernel sizes to inputs of different lengths. Experimental results on the standard motion prediction benchmarks, Human3.6M and the CMU motion capture dataset, show that our approach consistently outperforms state-of-the-art methods.
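To make the multi-scale idea concrete, the following is a minimal PyTorch sketch, not the paper's implementation: each branch convolves a different-length suffix of the observed motion with a kernel size matched to that length, and the per-branch features are concatenated into a single embedding. The subsequence lengths, kernel sizes, channel counts, and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TemporalInceptionSketch(nn.Module):
    """Hypothetical multi-scale temporal encoder in the spirit of TIM."""

    def __init__(self, joint_dim=66, out_channels=16,
                 subseq_lens=(5, 10), kernel_sizes=(2, 3)):
        super().__init__()
        self.subseq_lens = subseq_lens
        # One 1D convolution per input length, each with its own kernel size.
        self.branches = nn.ModuleList(
            [nn.Conv1d(joint_dim, out_channels, kernel_size=k)
             for k in kernel_sizes]
        )

    def forward(self, x):
        # x: (batch, joint_dim, num_past_frames)
        feats = []
        for length, conv in zip(self.subseq_lens, self.branches):
            # Convolve only the last `length` observed frames in this branch.
            feats.append(conv(x[:, :, -length:]))
        # Concatenate branch outputs along the temporal axis to form the embedding.
        return torch.cat(feats, dim=2)


if __name__ == "__main__":
    x = torch.randn(8, 66, 10)            # 8 sequences, 66 joint coordinates, 10 past frames
    emb = TemporalInceptionSketch()(x)
    print(emb.shape)                       # torch.Size([8, 16, 12])
```

In this sketch, shorter suffixes are paired with smaller kernels and longer suffixes with larger ones, so the embedding mixes fine-grained recent motion with coarser long-range context, which is the intuition the abstract describes.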