Spatiotemporal predictive learning, which predicts future frames from historical frames with the aid of deep learning, is widely used in many fields. Previous works have improved model performance mainly by widening or deepening the network, but this also brings a surge in memory overhead, which severely hinders the development and application of this technology. To improve performance without increasing memory consumption, we focus on scale, another dimension that improves model performance yet has a low memory requirement. Its effectiveness has been widely demonstrated in CNN-based tasks such as image classification and semantic segmentation, but it has not been fully explored in recent RNN models. In this paper, drawing on the benefits of multi-scale processing, we propose a general framework named Multi-Scale RNN (MS-RNN) to boost recent RNN models for spatiotemporal predictive learning. By integrating features at different scales, we enhance existing models with both improved performance and greatly reduced memory overhead. We verify our MS-RNN framework through exhaustive experiments with 6 popular RNN models (ConvLSTM, TrajGRU, PredRNN, PredRNN++, MIM, and MotionRNN) on 4 different datasets (Moving MNIST, KTH, TaxiBJ, and HKO-7). The results show that RNN models incorporating our framework have much lower memory cost yet better performance than before. Our code is released at \url{https://github.com/mazhf/MS-RNN}.
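To make the idea concrete, below is a minimal PyTorch sketch of multi-scale recurrence with ConvLSTM. It is an illustration under our own assumptions, not the released MS-RNN implementation (see the repository linked above): the `MSConvLSTM` class, its three-layer encoder-decoder layout, and the average-pool/interpolate rescaling are hypothetical choices that merely show why running recurrent layers at coarser scales cuts memory while a skip connection preserves spatial detail.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """A standard ConvLSTM cell (Shi et al., 2015): all four gates are
    computed with one convolution over the concatenated [input, hidden]."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


class MSConvLSTM(nn.Module):
    """Hypothetical multi-scale stack: the middle recurrent layer runs at
    half resolution, so its hidden states and gate activations need about
    a quarter of the full-resolution memory."""

    def __init__(self, in_ch=1, hid_ch=32):
        super().__init__()
        self.hid_ch = hid_ch
        self.enc = ConvLSTMCell(in_ch, hid_ch)    # full resolution
        self.mid = ConvLSTMCell(hid_ch, hid_ch)   # 1/2 resolution
        self.dec = ConvLSTMCell(hid_ch, hid_ch)   # full resolution
        self.head = nn.Conv2d(hid_ch, in_ch, 1)   # per-frame prediction

    def forward(self, frames):                    # frames: (T, B, C, H, W)
        T, B, _, H, W = frames.shape

        def zeros(h, w):
            z = torch.zeros(B, self.hid_ch, h, w, device=frames.device)
            return z, z.clone()

        s1, s2, s3 = zeros(H, W), zeros(H // 2, W // 2), zeros(H, W)
        preds = []
        for t in range(T):
            h1, s1 = self.enc(frames[t], s1)
            h2, s2 = self.mid(F.avg_pool2d(h1, 2), s2)   # downscale between layers
            h3, s3 = self.dec(F.interpolate(h2, size=(H, W)) + h1, s3)  # upscale + skip
            preds.append(self.head(h3))
        return torch.stack(preds)                 # next-frame predictions


# Toy usage: 10 frames of two one-channel 64x64 sequences.
model = MSConvLSTM(in_ch=1, hid_ch=32)
out = model(torch.randn(10, 2, 1, 64, 64))
print(out.shape)  # torch.Size([10, 2, 1, 64, 64])
\end{verbatim}

The same wrapping applies to any of the six base models: only the recurrent cell changes, while the downscale-recur-upscale pattern between layers stays the same.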