While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, no single setting is optimal for all input images. In this work, we therefore introduce a differentiable, parameter-free Adaptive Token Sampler (ATS) module, which can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer constant and varies for each input image. By integrating ATS as an additional layer within current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training. Moreover, due to its differentiable design, one can also train a vision transformer equipped with ATS. We evaluate the efficiency of our module on both image and video classification tasks by adding it to multiple state-of-the-art vision transformers. Our proposed module improves upon the state of the art by reducing computational costs (GFLOPs) by 2×, while preserving accuracy on the ImageNet, Kinetics-400, and Kinetics-600 datasets.
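The core idea of scoring tokens and adaptively sampling them can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the scoring rule (CLS-token attention weighted by value-vector norms) and the fixed-quantile inverse-transform sampling are assumptions based on the description above, and the function name `adaptive_token_sampling` is hypothetical.

```python
import numpy as np

def adaptive_token_sampling(attn_cls, values, max_tokens):
    """Sketch of parameter-free adaptive token sampling.

    attn_cls: attention weights of the CLS token over N tokens, shape (N,)
    values:   value vectors of those tokens, shape (N, D)
    Returns indices of the kept tokens. Because duplicate samples collapse,
    the number of surviving tokens adapts to each input: concentrated
    scores keep few tokens, spread-out scores keep up to max_tokens.
    """
    # Significance score: CLS attention weighted by value-vector magnitude
    scores = attn_cls * np.linalg.norm(values, axis=1)
    scores = scores / scores.sum()
    # Inverse-transform sampling of fixed quantiles over the score CDF
    cdf = np.cumsum(scores)
    quantiles = (np.arange(max_tokens) + 0.5) / max_tokens
    idx = np.searchsorted(cdf, quantiles)
    # Duplicates map to the same high-score token and are merged
    return np.unique(idx)
```

On an input where the CLS token attends almost entirely to one token, the call returns a single index; when attention is uniform, it returns `max_tokens` distinct indices, so the token budget varies per image as described.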