Local Transformer-based classification models have recently achieved promising results with relatively low computational costs. However, the effect of aggregating spatially global information in local Transformer-based architectures remains unclear. This work investigates the outcome of applying a global attention-based module, named multi-resolution overlapped attention (MOA), after each stage of a local window-based Transformer. The proposed MOA employs slightly larger, overlapping patches for the key to enable the transmission of neighboring-pixel information, which leads to a significant performance gain. In addition, we thoroughly investigate the effect of the dimensions of essential architectural components through extensive experiments and discover an optimal architecture design. Extensive experimental results on the CIFAR-10, CIFAR-100, and ImageNet-1K datasets demonstrate that the proposed approach outperforms previous vision Transformers with comparatively fewer parameters.
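For concreteness, the following is a minimal sketch, assuming PyTorch, of a global attention layer whose keys and values come from slightly larger, overlapping patches while queries come from non-overlapping patches. The patch size, overlap amount, and projection layout are illustrative assumptions, not the paper's exact MOA configuration.

```python
# Illustrative sketch of overlapped global attention in the spirit of MOA.
# Assumption: patch sizes, overlap, and single-head attention are chosen
# for clarity, not taken from the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverlappedGlobalAttention(nn.Module):
    def __init__(self, dim, patch=4, overlap=2):
        super().__init__()
        self.patch = patch                 # non-overlapping query patch size
        self.k_patch = patch + overlap     # slightly larger key/value patch
        self.stride = patch                # stride < k_patch => overlapping keys
        self.q_proj = nn.Linear(dim * patch * patch, dim)
        self.kv_proj = nn.Linear(dim * self.k_patch * self.k_patch, 2 * dim)
        self.out_proj = nn.Linear(dim, dim * patch * patch)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (B, C, H, W) feature map produced by a local-window stage;
        # H and W are assumed divisible by the patch size.
        B, C, H, W = x.shape
        # Queries: non-overlapping patches.
        q = F.unfold(x, kernel_size=self.patch, stride=self.patch)
        q = self.q_proj(q.transpose(1, 2))                       # (B, N, dim)
        # Keys/values: larger overlapping patches; padding keeps the
        # patch count equal to the number of query patches.
        pad = (self.k_patch - self.patch) // 2
        kv = F.unfold(x, kernel_size=self.k_patch, stride=self.stride, padding=pad)
        k, v = self.kv_proj(kv.transpose(1, 2)).chunk(2, dim=-1)  # (B, N, dim) each
        # Global scaled dot-product attention across all patches.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v                            # (B, N, dim)
        # Fold the attended patches back into a spatial feature map.
        out = self.out_proj(out).transpose(1, 2)
        return F.fold(out, (H, W), kernel_size=self.patch, stride=self.patch)

# Usage: a 56x56 feature map with 96 channels, as might follow a first stage.
layer = OverlappedGlobalAttention(dim=96)
y = layer(torch.randn(2, 96, 56, 56))   # -> (2, 96, 56, 56)
```

Because the key/value stride equals the query patch size but the key/value window is larger, adjacent key patches share border pixels, which is the mechanism by which neighboring-pixel information propagates across windows.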