Video prediction is a challenging computer vision task with a wide range of applications. In this work, we present a new family of Transformer-based models for video prediction. First, an efficient local spatial-temporal separation attention mechanism is proposed to reduce the complexity of standard Transformers. Then, a fully autoregressive model, a partially autoregressive model, and a non-autoregressive model are developed on top of this efficient Transformer. The partially autoregressive model achieves performance similar to the fully autoregressive model while offering faster inference. The non-autoregressive model not only infers faster but also mitigates the quality degradation of its autoregressive counterparts, at the cost of additional parameters and an additional loss function for learning. Using the same attention mechanism, we conduct a comprehensive study comparing the three proposed video prediction variants. Experiments show that the proposed models are competitive with more complex state-of-the-art convolutional LSTM-based models. The source code is available at https://github.com/XiYe20/VPTR.
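As a rough illustration of the separated attention idea, the sketch below factorizes attention over the spatial and temporal axes, so the cost drops from roughly O((T·H·W)²) for joint attention over all tokens to O(T·(H·W)² + H·W·T²). This is a minimal sketch under stated assumptions: the local windowing mentioned above is omitted, and the module layout, tensor shapes, and class name are illustrative, not the authors' implementation.

```python
# Minimal sketch of spatial-temporal separated attention (assumption:
# factorized global attention; VPTR additionally uses local windows).
import torch
import torch.nn as nn

class SeparatedSTAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C), where N = H*W feature-map tokens per frame.
        B, T, N, C = x.shape
        # Spatial attention: tokens within the same frame attend to each other.
        xs = x.reshape(B * T, N, C)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(B, T, N, C)
        # Temporal attention: each spatial location attends across frames.
        xt = x.permute(0, 2, 1, 3).reshape(B * N, T, C)
        xt, _ = self.temporal_attn(xt, xt, xt)
        x = x + xt.reshape(B, N, T, C).permute(0, 2, 1, 3)
        return x

if __name__ == "__main__":
    block = SeparatedSTAttention(dim=64)
    clip = torch.randn(2, 10, 8 * 8, 64)  # 2 clips, 10 frames, 8x8 feature map
    print(block(clip).shape)  # torch.Size([2, 10, 64, 64])
```

Splitting attention this way keeps each softmax over at most max(N, T) tokens instead of N·T, which is where the complexity reduction over a standard Transformer comes from.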
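To make the inference-speed trade-off concrete, the hedged sketch below contrasts frame-by-frame autoregressive rollout with one-shot non-autoregressive decoding. Here `predictor`, the zero-initialized future queries, and all shapes are hypothetical stand-ins, not VPTR's API.

```python
# Hedged sketch: autoregressive vs. non-autoregressive decoding.
# `predictor` stands in for any sequence model mapping (B, T, C) -> (B, T, C).
import torch
import torch.nn as nn

def autoregressive_rollout(predictor: nn.Module, past: torch.Tensor, n_future: int) -> torch.Tensor:
    # Frame-by-frame: each predicted frame is fed back in, so the model runs
    # n_future times; errors can accumulate, degrading later frames.
    frames = past
    for _ in range(n_future):
        next_frame = predictor(frames)[:, -1:]  # keep only the newest prediction
        frames = torch.cat([frames, next_frame], dim=1)
    return frames[:, -n_future:]

def non_autoregressive_decode(predictor: nn.Module, past: torch.Tensor, n_future: int) -> torch.Tensor:
    # One-shot: all future frames are decoded in a single forward pass
    # (faster, but the future queries add parameters that must be learned).
    queries = torch.zeros(past.size(0), n_future, past.size(2))  # placeholder queries
    return predictor(torch.cat([past, queries], dim=1))[:, -n_future:]

if __name__ == "__main__":
    toy = nn.Identity()  # stands in for a real video Transformer
    past = torch.randn(2, 5, 64)
    print(autoregressive_rollout(toy, past, 3).shape)     # torch.Size([2, 3, 64])
    print(non_autoregressive_decode(toy, past, 3).shape)  # torch.Size([2, 3, 64])
```

The single forward pass is why the non-autoregressive variant decodes faster and avoids feeding its own errors back in, while the extra learned queries (and an extra training loss, per the abstract) are the price paid for that.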