Vision transformers (ViTs) have become popular architectures and have outperformed convolutional neural networks (CNNs) on various vision tasks. However, such powerful transformers bring a huge computational burden, and the essential bottleneck behind this is the exhaustive token-to-token comparison. To alleviate this, we delve deeply into the model properties of ViTs and observe that ViTs exhibit sparse attention with high token similarity. This intuitively points to a feasible, structure-agnostic dimension, the token number, for reducing the computational cost. Based on this exploration, we propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT. Specifically, we first design a novel Token Slimming Module (TSM), which boosts the inference efficiency of ViTs by dynamic token aggregation. Unlike token hard dropping, our TSM softly integrates redundant tokens into fewer informative ones, and can thus dynamically zoom visual attention without cutting off discriminative token relations in the images. Furthermore, we introduce a concise Dense Knowledge Distillation (DKD) framework, which densely transfers unorganized token information in a flexible auto-encoder manner. Owing to the similar structure of teacher and student, our framework can effectively leverage structural knowledge for better convergence. Finally, we conduct extensive experiments to evaluate our SiT. They demonstrate that our method can speed up ViTs by 1.7x with a negligible accuracy drop, and even speed up ViTs by 3.6x while maintaining 97% of their performance. Surprisingly, by simply arming LV-ViT with our SiT, we achieve new state-of-the-art performance on ImageNet, surpassing all CNNs and ViTs in the recent literature.
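To make the soft token aggregation idea concrete, below is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's actual TSM implementation: names such as `TokenSlimmingSketch` and `weight_proj` are hypothetical, and the aggregation weights are simply predicted by a linear layer and normalized over the input tokens, so each slimmed token is a soft mixture of all input tokens rather than a hard selection.

```python
import torch
import torch.nn as nn


class TokenSlimmingSketch(nn.Module):
    """Hypothetical sketch of soft token aggregation (not the authors' exact TSM).

    Each of the `num_out` slimmed tokens is a normalized weighted mixture of the
    input tokens, so redundant tokens are merged instead of hard-dropped.
    """

    def __init__(self, dim: int, num_out: int):
        super().__init__()
        # Predict per-sample aggregation weights from the token features.
        self.weight_proj = nn.Linear(dim, num_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_in, dim)
        weights = self.weight_proj(x)          # (batch, num_in, num_out)
        weights = weights.softmax(dim=1)       # normalize over the input tokens
        slimmed = weights.transpose(1, 2) @ x  # (batch, num_out, dim)
        return slimmed


# Usage: shrink 196 tokens to 98, roughly a 2x token reduction.
tokens = torch.randn(2, 196, 384)
tsm = TokenSlimmingSketch(dim=384, num_out=98)
print(tsm(tokens).shape)  # torch.Size([2, 98, 384])
```

Because the slimmed tokens remain linear combinations of the originals, the attention computation downstream operates on fewer tokens while discriminative token relations are preserved in aggregated form.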