The Transformer has achieved remarkable success in understanding 1- and 2-dimensional signals (e.g., NLP and image content understanding). As a potential alternative to convolutional neural networks, it offers the merits of strong interpretability, high discriminative power on hyper-scale data, and flexibility in processing variable-length inputs. However, its encoders naturally contain computationally intensive operations such as pair-wise self-attention, which incur a heavy computational burden when applied to complex 3-dimensional video signals. This paper presents the Token Shift Module (i.e., TokShift), a novel, zero-parameter, zero-FLOPs operator for modeling temporal relations within each transformer encoder. Specifically, TokShift merely shifts part of the [Class] token features back and forth across adjacent frames along the temporal dimension. We then densely plug the module into each encoder of a plain 2D vision transformer to learn 3D video representations. It is worth noting that our TokShift transformer is a pure, convolution-free video transformer pilot with computational efficiency for video understanding. Experiments on standard benchmarks verify its robustness, effectiveness, and efficiency. In particular, with input clips of 8/12 frames, the TokShift transformer achieves SOTA precision: 79.83%/80.40% on Kinetics-400, 66.56% on EGTEA-Gaze+, and 96.80% on UCF-101, comparable to or better than existing SOTA convolutional counterparts. Our code is open-sourced at: https://github.com/VideoNetworks/TokShift-Transformer.
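To make the shift operation concrete, below is a minimal PyTorch sketch of the idea described above: a fraction of the [Class] token's channels is shifted one frame backward, another fraction one frame forward, and the rest (along with all patch tokens) stays in place. The tensor layout, the function name `tok_shift`, and the `fold_div` ratio are illustrative assumptions, not the released implementation; consult the repository above for the authors' exact code.

```python
import torch

def tok_shift(x: torch.Tensor, num_frames: int, fold_div: int = 4) -> torch.Tensor:
    """Sketch of the TokShift idea (assumptions, not the official code).

    Assumes x has shape (batch * num_frames, num_tokens, channels),
    with the [Class] token at token index 0. `fold_div` controls what
    fraction of the [Class] channels is shifted in each direction.
    """
    bt, n, c = x.shape
    b = bt // num_frames
    x = x.view(b, num_frames, n, c)

    cls_tok = x[:, :, 0, :]        # (b, t, c): [Class] token per frame
    fold = c // fold_div           # number of channels shifted per direction

    shifted = torch.zeros_like(cls_tok)
    # First fold of channels: shift one frame backward in time.
    shifted[:, :-1, :fold] = cls_tok[:, 1:, :fold]
    # Second fold of channels: shift one frame forward in time.
    shifted[:, 1:, fold:2 * fold] = cls_tok[:, :-1, fold:2 * fold]
    # Remaining channels are left in place.
    shifted[:, :, 2 * fold:] = cls_tok[:, :, 2 * fold:]

    x = x.clone()
    x[:, :, 0, :] = shifted        # patch tokens are untouched
    return x.view(bt, n, c)
```

Because the operation only moves existing features across frames, it introduces no learnable parameters and no multiply-accumulate operations, which is consistent with the zero-parameter, zero-FLOPs claim in the abstract.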