Vision Transformers (ViTs) have triggered the most recent and significant breakthroughs in computer vision. Their efficient designs are mostly guided by the indirect metric of computational complexity, i.e., FLOPs, which, however, often correlates poorly with direct metrics such as throughput. Thus, we propose to use direct speed evaluation on the target platform as the design principle for efficient ViTs. Particularly, we introduce LITv2, a simple and effective ViT which performs favourably against existing state-of-the-art methods across a spectrum of model sizes at faster speed. At the core of LITv2 is a novel self-attention mechanism, which we dub HiLo. HiLo is inspired by the insight that high frequencies in an image capture local fine details while low frequencies capture global structures, whereas a standard multi-head self-attention layer neglects this frequency characteristic. Therefore, we propose to disentangle the high- and low-frequency patterns in an attention layer by separating the heads into two groups: one group encodes high frequencies via self-attention within each local window, and the other group encodes low frequencies by performing global attention between each query position in the input feature map and the average-pooled low-frequency keys and values from each window. Benefiting from the efficient design of both groups, we show that HiLo is superior to existing attention mechanisms by comprehensively benchmarking FLOPs, speed and memory consumption on GPUs and CPUs. For example, HiLo is 1.4x faster than spatial reduction attention and 1.6x faster than local window attention on CPUs. Powered by HiLo, LITv2 serves as a strong backbone for mainstream vision tasks including image classification, dense detection and segmentation. Code is available at https://github.com/ziplab/LITv2.
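The two-group attention described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the learned Q/K/V projections are replaced by identity maps, and the head-split ratio `alpha` and the window size are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hilo_attention(x, num_heads=4, alpha=0.5, window=2):
    """Single HiLo layer sketch. x: feature map of shape (H, W, C).

    alpha controls the fraction of heads assigned to the low-frequency
    (Lo-Fi) group; the rest form the high-frequency (Hi-Fi) group.
    Assumes H and W are divisible by `window` and C by `num_heads`.
    """
    H, W, C = x.shape
    head_dim = C // num_heads
    lo_heads = int(num_heads * alpha)   # low-frequency heads
    hi_heads = num_heads - lo_heads     # high-frequency heads
    scale = head_dim ** -0.5

    # Identity projections stand in for learned Q/K/V weights.
    tokens = x.reshape(H * W, num_heads, head_dim)
    out = np.empty_like(tokens)

    # Hi-Fi group: self-attention inside each non-overlapping window.
    for i in range(0, H, window):
        for j in range(0, W, window):
            win = x[i:i + window, j:j + window].reshape(-1, num_heads, head_dim)
            q = k = v = win[:, :hi_heads]                      # (w*w, hi, d)
            attn = softmax(np.einsum('qhd,khd->hqk', q, k) * scale)
            o = np.einsum('hqk,khd->qhd', attn, v)
            idx = [ii * W + jj
                   for ii in range(i, i + window)
                   for jj in range(j, j + window)]
            out[idx, :hi_heads] = o

    # Lo-Fi group: every query attends globally to keys/values that are
    # average-pooled within each window (one low-frequency token per window).
    pooled = x.reshape(H // window, window, W // window, window, C).mean(axis=(1, 3))
    pooled = pooled.reshape(-1, num_heads, head_dim)[:, hi_heads:]  # (Hw*Ww, lo, d)
    q = tokens[:, hi_heads:]
    attn = softmax(np.einsum('qhd,khd->hqk', q, pooled) * scale)
    out[:, hi_heads:] = np.einsum('hqk,khd->qhd', attn, pooled)

    return out.reshape(H, W, C)
```

The efficiency gain comes from the Lo-Fi group: its key/value set shrinks from H*W tokens to (H/w)*(W/w) pooled tokens, while the Hi-Fi group restricts attention to w*w tokens per window, so neither group pays the quadratic cost of full global attention.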