Transformers have been successful in many vision tasks, thanks to their capability of capturing long-range dependencies. However, their quadratic computational complexity poses a major obstacle to applying them to vision tasks that require dense predictions, such as object detection, feature matching, and stereo matching. We introduce QuadTree Attention, which reduces the computational complexity from quadratic to linear. Our quadtree transformer builds token pyramids and computes attention in a coarse-to-fine manner. At each level, the top K patches with the highest attention scores are selected, so that at the next level, attention is only evaluated within the relevant regions corresponding to these top K patches. We demonstrate that quadtree attention achieves state-of-the-art performance in various vision tasks, e.g., a 4.0% improvement in feature matching on ScanNet, about 50% FLOPs reduction in stereo matching, 0.4-1.5% improvement in top-1 accuracy on ImageNet classification, 1.2-1.8% improvement on COCO object detection, and 0.7-2.4% improvement on semantic segmentation over previous state-of-the-art transformers. The code is available at https://github.com/Tangshitao/QuadtreeAttention.
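To make the coarse-to-fine procedure concrete, below is a minimal two-level sketch in plain PyTorch, assuming 1D token sequences and a single attention head. The function name `quadtree_attention_two_level` and all parameter names are illustrative, not the authors' API; the full implementation (multi-level pyramids, multi-head projections, and message aggregation across levels) is in the linked repository.

```python
# Hypothetical simplified sketch of coarse-to-fine (two-level) quadtree attention.
import torch


def quadtree_attention_two_level(q_fine, k_fine, v_fine, patch, topk):
    """q_fine, k_fine, v_fine: (B, N, C) fine-level tokens, N divisible by `patch`.
    patch: number of fine tokens pooled into one coarse token.
    topk: number of coarse key patches kept per coarse query patch."""
    B, N, C = q_fine.shape
    M = N // patch  # number of coarse tokens

    # Build the coarse level of the token pyramid by average pooling.
    q_coarse = q_fine.view(B, M, patch, C).mean(dim=2)  # (B, M, C)
    k_coarse = k_fine.view(B, M, patch, C).mean(dim=2)  # (B, M, C)

    # Coarse attention scores between coarse queries and coarse keys.
    scores = torch.einsum('bmc,bnc->bmn', q_coarse, k_coarse) / C ** 0.5

    # Keep only the top-K coarse key patches for each coarse query patch.
    topk_idx = scores.topk(topk, dim=-1).indices  # (B, M, topk)

    # Gather the fine tokens belonging to the selected coarse patches.
    k_patches = k_fine.view(B, M, patch, C)
    v_patches = v_fine.view(B, M, patch, C)
    idx = topk_idx[..., None, None].expand(B, M, topk, patch, C)
    k_sel = torch.gather(k_patches[:, None].expand(B, M, M, patch, C), 2, idx)
    v_sel = torch.gather(v_patches[:, None].expand(B, M, M, patch, C), 2, idx)
    k_sel = k_sel.reshape(B, M, topk * patch, C)
    v_sel = v_sel.reshape(B, M, topk * patch, C)

    # Fine attention: each fine query attends only to fine keys inside its
    # selected coarse patches, so this step scales linearly with N.
    q_patches = q_fine.view(B, M, patch, C)
    attn = torch.einsum('bmpc,bmkc->bmpk', q_patches, k_sel) / C ** 0.5
    attn = attn.softmax(dim=-1)
    out = torch.einsum('bmpk,bmkc->bmpc', attn, v_sel)
    return out.reshape(B, N, C)


# Toy usage: 64 tokens grouped into coarse patches of 8, keeping the 2 best patches.
x = torch.randn(2, 64, 32)
y = quadtree_attention_two_level(x, x, x, patch=8, topk=2)
print(y.shape)  # torch.Size([2, 64, 32])
```

In the paper's setting the recursion continues over more pyramid levels, so the coarsest full attention remains small while every finer level only attends inside the regions selected above it.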