Vision transformers (ViTs) have become popular architectures and outperform convolutional neural networks (CNNs) on various vision tasks. However, such powerful transformers bring a heavy computation burden due to the exhaustive token-to-token comparison. Previous works focus on dropping insignificant tokens to reduce the computational cost of ViTs. However, as the dropping ratio increases, this hard manner inevitably discards vital tokens, which limits its efficiency. To solve this issue, we propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT. Specifically, we first design a novel Token Slimming Module (TSM), which boosts the inference efficiency of ViTs by dynamic token aggregation. Unlike token hard dropping, our TSM softly integrates redundant tokens into fewer informative ones. It can dynamically zoom visual attention without cutting off discriminative token relations in the images, even with a high slimming ratio. Furthermore, we introduce a concise Feature Recalibration Distillation (FRD) framework, wherein we design a reverse version of TSM (RTSM) to recalibrate the unstructured tokens in a flexible auto-encoder manner. Owing to the similar structure between teacher and student, our FRD can effectively leverage structural knowledge for better convergence. Finally, we conduct extensive experiments to evaluate our SiT. The results demonstrate that our method can speed up ViTs by 1.7x with a negligible accuracy drop, and even speed up ViTs by 3.6x while maintaining 97% of their performance. Surprisingly, by simply arming LV-ViT with our SiT, we achieve new state-of-the-art performance on ImageNet. Code is available at https://github.com/Sense-X/SiT.
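The following is a minimal sketch of soft token aggregation as described above: instead of hard-dropping tokens, N input tokens are mixed into M < N output tokens through a learned, normalized assignment. It is only illustrative of the idea, not the paper's exact TSM; the module name, the scorer MLP, and all dimensions are assumptions.

    import torch
    import torch.nn as nn

    class TokenSlimmingSketch(nn.Module):
        """Illustrative soft token aggregation (hypothetical, not the official TSM).

        Each of the M slimmed tokens is a convex combination of the N input
        tokens, so no token is discarded outright.
        """
        def __init__(self, dim: int, num_out_tokens: int):
            super().__init__()
            # Hypothetical scorer: predicts, for every input token, its weight
            # toward each of the M slimmed tokens.
            self.scorer = nn.Sequential(
                nn.Linear(dim, dim // 2),
                nn.GELU(),
                nn.Linear(dim // 2, num_out_tokens),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (B, N, C) input tokens
            weights = self.scorer(x)               # (B, N, M) raw scores
            weights = weights.softmax(dim=1)       # normalize over the N input tokens
            slimmed = weights.transpose(1, 2) @ x  # (B, M, C) aggregated tokens
            return slimmed

    # Usage: slim 196 patch tokens down to 49 (a 4x token reduction).
    tsm = TokenSlimmingSketch(dim=384, num_out_tokens=49)
    out = tsm(torch.randn(2, 196, 384))
    print(out.shape)  # torch.Size([2, 49, 384])

Because the assignment weights are differentiable, such a module can be trained end-to-end with the backbone, which is what allows the slimming to adapt dynamically to each image rather than following a fixed dropping rule.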