Most modern video recognition models are designed to operate on short video clips (e.g., 5-10s in length). Thus, it is challenging to apply such models to long movie understanding tasks, which typically require sophisticated long-range temporal reasoning. The recently introduced video transformers partially address this issue by using long-range temporal self-attention. However, due to the quadratic cost of self-attention, such models are often costly and impractical to use. Instead, we propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space sequence (S4) layer. Our model uses a standard Transformer encoder for short-range spatiotemporal feature extraction, and a multi-scale temporal S4 decoder for subsequent long-range temporal reasoning. By progressively reducing the spatiotemporal feature resolution and channel dimension at each decoder layer, ViS4mer learns complex long-range spatiotemporal dependencies in a video. Furthermore, ViS4mer is $2.63\times$ faster and requires $8\times$ less GPU memory than the corresponding pure self-attention-based model. It also achieves state-of-the-art results in $6$ out of $9$ long-form movie video classification tasks on the Long Video Understanding (LVU) benchmark. Finally, we show that our approach successfully generalizes to other domains, achieving competitive results on the Breakfast and the COIN procedural activity datasets. The code is publicly available at: https://github.com/md-mohaiminul/ViS4mer.
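To make the decoder design concrete, below is a minimal PyTorch sketch of the multi-scale temporal decoder idea, assuming pre-extracted token features from a short-range Transformer encoder (e.g., a pretrained ViT). The names `SimpleDiagonalSSM`, `MultiScaleS4Block`, and `ViS4merDecoderSketch`, the real-valued diagonal state-space parameterization (a simplified stand-in for the full S4 layer), and all hyperparameters are illustrative assumptions, not the authors' implementation; see the linked repository for the actual model.

```python
import torch
import torch.nn as nn


class SimpleDiagonalSSM(nn.Module):
    """Minimal real-valued diagonal state-space layer (simplified stand-in for S4).

    Computes y = conv(u, K) + D*u, where K_l = sum_n C_n * exp(dt * A_n)^l * B_n * dt.
    """

    def __init__(self, d_model, d_state=64):
        super().__init__()
        # A = -exp(log_neg_A) < 0 keeps the diagonal transition stable.
        self.log_neg_A = nn.Parameter(torch.rand(d_model, d_state))
        self.B = nn.Parameter(torch.randn(d_model, d_state) / d_state ** 0.5)
        self.C = nn.Parameter(torch.randn(d_model, d_state) / d_state ** 0.5)
        self.log_dt = nn.Parameter(torch.full((d_model,), -2.0))  # per-channel step size
        self.D = nn.Parameter(torch.randn(d_model))               # skip connection

    def kernel(self, L):
        A = -torch.exp(self.log_neg_A)                    # (d_model, d_state)
        dt = torch.exp(self.log_dt).unsqueeze(-1)         # (d_model, 1)
        dA = torch.exp(dt * A)                            # discretized diagonal transition
        pos = torch.arange(L, device=A.device)            # (L,)
        powers = dA.unsqueeze(-1) ** pos                  # (d_model, d_state, L)
        return torch.einsum('hn,hn,hnl->hl', self.C, self.B * dt, powers)

    def forward(self, u):                                 # u: (batch, L, d_model)
        L = u.size(1)
        K = self.kernel(L)                                # (d_model, L)
        # Causal convolution of the whole sequence with the SSM kernel via FFT.
        u_f = torch.fft.rfft(u.transpose(1, 2), n=2 * L)
        K_f = torch.fft.rfft(K, n=2 * L)
        y = torch.fft.irfft(u_f * K_f, n=2 * L)[..., :L].transpose(1, 2)
        return y + u * self.D


class MultiScaleS4Block(nn.Module):
    """One decoder layer: LayerNorm -> SSM (residual) -> temporal pooling -> channel-reducing MLP."""

    def __init__(self, d_in, d_out, pool=2):
        super().__init__()
        self.norm = nn.LayerNorm(d_in)
        self.ssm = SimpleDiagonalSSM(d_in)
        self.pool = nn.AvgPool1d(pool)
        self.proj = nn.Sequential(nn.Linear(d_in, d_out), nn.GELU())

    def forward(self, x):                                 # x: (batch, L, d_in)
        x = x + self.ssm(self.norm(x))                    # long-range mixing over all tokens
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)  # reduce the token resolution
        return self.proj(x)                               # reduce the channel dimension


class ViS4merDecoderSketch(nn.Module):
    """Stack of multi-scale SSM blocks on top of frozen encoder tokens, plus a classifier head."""

    def __init__(self, d_model=1024, num_layers=3, num_classes=10):
        super().__init__()
        dims = [d_model // (2 ** i) for i in range(num_layers + 1)]  # e.g. 1024 -> 512 -> 256 -> 128
        self.layers = nn.ModuleList(
            MultiScaleS4Block(dims[i], dims[i + 1]) for i in range(num_layers))
        self.head = nn.Linear(dims[-1], num_classes)

    def forward(self, tokens):                            # tokens: (batch, L, d_model) from the encoder
        for layer in self.layers:
            tokens = layer(tokens)
        return self.head(tokens.mean(dim=1))              # average-pool and classify


# Example: 2048 tokens of dimension 1024 (e.g., 128 frames x 16 patch tokens per frame).
model = ViS4merDecoderSketch(d_model=1024, num_layers=3, num_classes=10)
logits = model(torch.randn(2, 2048, 1024))                # -> (2, 10)
```

With `num_layers=3`, the channel dimension shrinks 1024 → 512 → 256 → 128 while the token sequence is halved at every layer, which illustrates why this progressive reduction, combined with an S4-style layer whose cost is near-linear in sequence length, is far cheaper than full self-attention over all spatiotemporal tokens.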