Learning rich, multi-scale spatiotemporal semantics from high-dimensional videos is a challenging task, owing to the large local redundancy and complex global dependency between video frames. Recent advances in this area have been driven mainly by 3D convolutional neural networks and vision transformers. Although 3D convolution can efficiently aggregate local context to suppress local redundancy within a small 3D neighborhood, its limited receptive field prevents it from capturing global dependency. Conversely, vision transformers can effectively capture long-range dependency through the self-attention mechanism, but they are limited in reducing local redundancy because of blind similarity comparison among all the tokens in each layer. Based on these observations, we propose a novel Unified transFormer (UniFormer), which seamlessly integrates the merits of 3D convolution and spatiotemporal self-attention in a concise transformer format and achieves a preferable balance between computation and accuracy. Unlike traditional transformers, our relation aggregator tackles both spatiotemporal redundancy and dependency by learning local and global token affinity in shallow and deep layers, respectively. We conduct extensive experiments on popular video benchmarks, e.g., Kinetics-400, Kinetics-600, and Something-Something V1&V2. With only ImageNet-1K pretraining, our UniFormer achieves 82.9%/84.8% top-1 accuracy on Kinetics-400/Kinetics-600, while requiring 10x fewer GFLOPs than other state-of-the-art methods. For Something-Something V1 and V2, our UniFormer achieves new state-of-the-art top-1 accuracies of 60.9% and 71.2%, respectively. Code is available at https://github.com/Sense-X/UniFormer.
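To make the local-vs-global affinity contrast concrete, the following is a minimal conceptual sketch (not the authors' implementation) in NumPy, over a 1-D token sequence for simplicity. In shallow layers the affinity is position-based and restricted to a small neighborhood (convolution-like), so it is cheap and suppresses local redundancy; in deep layers the affinity is content-based over all token pairs (self-attention-like), capturing long-range dependency. Function names and the window radius are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_affinity(x):
    # Deep layers: content-based affinity among ALL tokens (self-attention style).
    sim = x @ x.T / np.sqrt(x.shape[1])   # scaled dot-product similarity
    return softmax(sim, axis=-1)          # rows sum to 1

def local_affinity(n, radius=1):
    # Shallow layers: position-based affinity restricted to a small
    # neighborhood (convolution style); here uniform weights for illustration,
    # learnable in the actual model.
    a = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        a[i, lo:hi] = 1.0 / (hi - lo)
    return a

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))          # 6 tokens, 4 channels
y_local = local_affinity(6) @ x          # cheap local aggregation
y_global = global_affinity(x) @ x        # long-range aggregation
```

Both branches share the same "affinity matrix times token features" form; the unified transformer format in the paper exploits exactly this shared structure, swapping only how the affinity is computed per stage.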