Recently, Transformers have shown promising performance on various vision tasks. However, the high cost of global self-attention remains challenging for Transformers, especially for high-resolution vision tasks. Local self-attention restricts attention computation to a limited region for the sake of efficiency, resulting in insufficient context modeling because its receptive field is small. In this work, we introduce two new attention modules to enhance the global modeling capability of the hierarchical vision transformer, namely, random sampling windows (RS-Win) and important region windows (IR-Win). Specifically, RS-Win samples random image patches to compose each window, following a uniform distribution, i.e., the patches in RS-Win can come from any position in the image. IR-Win composes windows according to the weights of the image patches in the attention map. Notably, RS-Win is able to capture global information throughout the entire model, even in the earlier, high-resolution stages. IR-Win enables the self-attention module to focus on important regions of the image and capture more informative features. Incorporating these designs, the RSIR-Win Transformer demonstrates competitive performance on common vision tasks.
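The RS-Win mechanism above can be illustrated with a minimal NumPy sketch: patch tokens are assigned to windows by a uniform random permutation, plain self-attention runs within each window, and outputs are scattered back to their original positions. Function names, the omission of learned Q/K/V projections, and the permutation-based partition are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rs_win_partition(x, window_size, rng=None):
    # Assign patches to windows via a uniform random permutation,
    # so a window can contain patches from anywhere in the image.
    rng = np.random.default_rng() if rng is None else rng
    n, c = x.shape
    assert n % window_size == 0, "token count must divide evenly into windows"
    perm = rng.permutation(n)
    windows = x[perm].reshape(n // window_size, window_size, c)
    return windows, perm

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(windows):
    # Plain scaled dot-product self-attention within each window
    # (learned projections omitted for brevity).
    q = k = v = windows
    scale = windows.shape[-1] ** -0.5
    attn = softmax(q @ k.transpose(0, 2, 1) * scale)
    return attn @ v

def rs_win_attention(x, window_size, rng=None):
    # x: (N, C) patch tokens; returns (N, C) tokens in original order.
    windows, perm = rs_win_partition(x, window_size, rng)
    out = window_attention(windows).reshape(-1, x.shape[-1])
    y = np.empty_like(out)
    y[perm] = out  # scatter attended tokens back to original positions
    return y
```

Because the permutation is uniform over all positions, every window mixes information from across the whole image, which is how RS-Win provides global context even at high-resolution stages.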