While state-of-the-art vision transformer models achieve promising results for image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable, parameter-free Adaptive Token Sampling (ATS) module that can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer static but varies for each input image. By integrating ATS as an additional layer within current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pretrained vision transformers as a plug-and-play module, reducing their GFLOPs without any additional training. However, due to its differentiable design, a vision transformer equipped with ATS can also be trained. We evaluate our module on the ImageNet dataset by adding it to multiple state-of-the-art vision transformers. Our evaluations show that the proposed module improves the state-of-the-art by reducing the computational cost (GFLOPs) by 37% while preserving the accuracy.
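To make the idea of scoring and adaptively sampling tokens concrete, the following is a minimal NumPy sketch of one plausible parameter-free sampling scheme: tokens are scored by the class token's attention weights, and a variable-size subset is selected by inverse-transform sampling over the score distribution. The function name, the use of raw attention as the score, and the fixed-quantile sampling are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_token_sampling(tokens, cls_attention, n_max):
    """Keep a variable-size subset of tokens based on significance scores.

    tokens:        (num_tokens, dim) array of patch token embeddings
    cls_attention: (num_tokens,) attention weights from the class token
                   (assumed score; the actual scoring function may differ)
    n_max:         upper bound on the number of sampled tokens
    """
    # Normalize scores into a probability distribution over tokens.
    scores = cls_attention / cls_attention.sum()
    # Cumulative distribution over token indices.
    cdf = np.cumsum(scores)
    # Inverse-transform sampling at fixed quantiles: when the score mass
    # concentrates on a few tokens, several quantiles map to the same
    # token, duplicates collapse, and fewer tokens are kept. This is what
    # makes the number of retained tokens input-dependent.
    quantiles = (np.arange(1, n_max + 1) - 0.5) / n_max
    picked = np.searchsorted(cdf, quantiles)
    kept = np.unique(picked)
    return tokens[kept], kept

# Example: 8 patch tokens with attention concentrated on tokens 2 and 5,
# so far fewer than n_max tokens survive the sampling step.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 16))
attn = np.array([0.02, 0.03, 0.40, 0.02, 0.03, 0.42, 0.04, 0.04])
sampled, idx = adaptive_token_sampling(tokens, attn, n_max=6)
print(idx)  # indices of the kept tokens; size varies with the attention
```

Because the selection is driven only by quantities the transformer already computes (attention weights), a module of this kind adds no learnable parameters, which is what allows it to be dropped into a pretrained network without retraining.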