Vision Transformer (ViT) has achieved remarkable performance on many vision tasks. However, ViT is inferior to convolutional neural networks (CNNs) for high-resolution mobile vision applications. The key computational bottleneck of ViT is the softmax attention module, whose cost grows quadratically with the input resolution. Reducing this cost is essential for deploying ViT on edge devices. Existing methods (e.g., Swin, PVT) restrict softmax attention to local windows or reduce the resolution of the key/value tensors, which sacrifices ViT's core advantage: global feature extraction. In this work, we present EfficientViT, an efficient ViT architecture for high-resolution, low-computation visual recognition. Instead of restricting softmax attention, we propose to replace it with linear attention while enhancing its local feature extraction ability with depthwise convolution. EfficientViT thus retains both global and local feature extraction capability while enjoying linear computational complexity. Extensive experiments on COCO object detection and Cityscapes semantic segmentation demonstrate the effectiveness of our method. On COCO, EfficientViT achieves 42.6 AP with 4.4G MACs, surpassing EfficientDet-D1 by 2.4 AP while using 27.9% fewer MACs. On Cityscapes, EfficientViT reaches 78.7 mIoU with 19.1G MACs, outperforming SegFormer by 2.5 mIoU while requiring less than one third of its computational cost. On a Qualcomm Snapdragon 855 CPU, EfficientViT runs 3x faster than EfficientNet while achieving higher ImageNet accuracy.
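The linear complexity comes from reordering the attention computation: with a kernel feature map φ (commonly ReLU in linear-attention designs), φ(Q)(φ(K)ᵀV) replaces softmax(QKᵀ)V, reducing the O(N²d) cost to O(Nd²) in the token count N. Below is a minimal PyTorch sketch of this idea alongside a depthwise-convolution branch for local features; the function and module names, shapes, and the exact way the two parts combine are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def relu_linear_attention(q, k, v, eps=1e-6):
    """Linear attention with phi = ReLU: phi(Q) @ (phi(K)^T @ V).

    q, k, v: (batch, heads, tokens, dim). The (dim x dim) key-value
    summary is computed once, so the cost is O(N * d^2) rather than
    the O(N^2 * d) of softmax attention.
    """
    q, k = F.relu(q), F.relu(k)
    kv = torch.einsum("bhnd,bhne->bhde", k, v)      # K^T V summary, size d x d
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(2))  # per-token normalizer
    return torch.einsum("bhnd,bhde->bhne", q, kv) / (z.unsqueeze(-1) + eps)


class DepthwiseLocalBranch(nn.Module):
    """Hypothetical local-feature branch: a 3x3 depthwise convolution
    (groups == channels) supplying the local inductive bias that
    linear attention alone lacks."""

    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size=3,
                            padding=1, groups=channels)

    def forward(self, x):  # x: (batch, channels, height, width)
        return self.dw(x)
```

Because the key-value summary has a fixed size independent of the token count, the attention cost scales linearly with the number of tokens, which is what makes high-resolution inputs affordable on edge hardware.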