Vision Transformers (ViT) serve as powerful vision models. Unlike convolutional neural networks, which dominated vision research in previous years, vision transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, an integral part of any transformer architecture, the self-attention mechanism, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models employ self-attention locally on non-interleaving windows. This relaxation reduces the complexity to linear in the input size; however, it limits cross-window interaction, hurting model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), that aggregates the input locally in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with window size, requiring up to 10x less memory while being up to 5x faster than existing methods.
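To make the learned-query idea concrete, the following is a minimal single-head sketch in JAX. It assumes a square window of size k, stride 1, and a single learned query shared across all windows; the names (qna_layer, extract_windows, params) are illustrative only and are not taken from the official implementation. Because the query is a learned parameter rather than a projection of each pixel, no per-location query computation is needed, and attention reduces to scoring the keys of each overlapping window against one shared vector.

import jax
import jax.numpy as jnp

def extract_windows(x, k):
    # x: (H, W, C) -> (H-k+1, W-k+1, k*k, C) overlapping patches,
    # analogous to the sliding receptive fields of a convolution.
    H, W, C = x.shape
    idx_h = jnp.arange(H - k + 1)[:, None] + jnp.arange(k)[None, :]   # (H', k)
    idx_w = jnp.arange(W - k + 1)[:, None] + jnp.arange(k)[None, :]   # (W', k)
    patches = x[idx_h][:, :, idx_w]             # (H', k, W', k, C)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (H', W', k, k, C)
    return patches.reshape(patches.shape[0], patches.shape[1], k * k, C)

def qna_layer(params, x, k):
    # Keys and values are projected from the input; the query is a
    # learned parameter shared across all windows (the core QnA idea).
    keys = x @ params['Wk']        # (H, W, D)
    vals = x @ params['Wv']        # (H, W, D)
    Kw = extract_windows(keys, k)  # (H', W', k*k, D)
    Vw = extract_windows(vals, k)  # (H', W', k*k, D)
    q = params['q']                # (D,) learned query
    scores = Kw @ q / jnp.sqrt(q.shape[-1])      # (H', W', k*k)
    attn = jax.nn.softmax(scores, axis=-1)       # attention within each window
    return jnp.einsum('hwn,hwnd->hwd', attn, Vw)  # (H', W', D)

# Usage with illustrative shapes: a 14x14 feature map, 7x7 windows.
key = jax.random.PRNGKey(0)
C, D, k = 32, 32, 7
params = {
    'Wk': jax.random.normal(key, (C, D)) * 0.02,
    'Wv': jax.random.normal(key, (C, D)) * 0.02,
    'q':  jax.random.normal(key, (D,)) * 0.02,
}
x = jax.random.normal(key, (14, 14, C))
y = qna_layer(params, x, k)   # (8, 8, D)

Note the design consequence this sketch illustrates: since the query does not depend on the pixel at the window center, memory scales with the number of windows times the window area for keys and values only, rather than additionally materializing a query per location, which is what enables the overlapping, convolution-like aggregation to remain fast.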