End-to-End Object Detection with Transformers (DETR) performs object detection with a Transformer and achieves performance comparable to two-stage detectors such as Faster R-CNN. However, DETR requires large computational resources for training and inference due to its high-resolution spatial input. In this paper, a novel Transformer variant named Adaptive Clustering Transformer (ACT) is proposed to reduce the computation cost for high-resolution inputs. ACT clusters the query features adaptively using Locality Sensitive Hashing (LSH) and approximates the query-key interaction with a prototype-key interaction. ACT reduces the quadratic $O(N^2)$ complexity inside self-attention to $O(NK)$, where $K$ is the number of prototypes in each layer. ACT can serve as a drop-in replacement for the original self-attention module without any retraining, and achieves a good balance between accuracy and computation cost (FLOPs). The code is available as supplementary material for ease of experiment replication and verification, and is released at \url{https://github.com/gaopengcuhk/SMCA-DETR/}.
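To make the prototype-key approximation concrete, the following is a minimal PyTorch sketch of the idea described above: queries are hashed into buckets with random hyperplanes (a SimHash-style LSH, used here for simplicity; the paper's actual hashing and broadcast scheme may differ), each bucket's mean serves as a prototype, and attention is computed between the $K$ prototypes and all $N$ keys instead of between all query-key pairs. Function names such as \texttt{lsh\_cluster} and \texttt{act\_attention} are illustrative, not taken from the released code.

\begin{verbatim}
import torch

def lsh_cluster(queries, num_hashes=8):
    """SimHash-style LSH: sign patterns against random hyperplanes
    give each query an integer bucket id; queries with the same
    pattern fall into the same cluster."""
    d = queries.size(-1)
    planes = torch.randn(d, num_hashes, device=queries.device)
    codes = (queries @ planes > 0).long()          # (N, num_hashes)
    weights = 2 ** torch.arange(num_hashes, device=queries.device)
    return (codes * weights).sum(-1)               # (N,) bucket ids

def act_attention(queries, keys, values, num_hashes=8):
    """Approximate attention with prototype-key interaction:
    O(N*K) score computation instead of O(N^2)."""
    buckets = lsh_cluster(queries, num_hashes)
    uniq, inverse = torch.unique(buckets, return_inverse=True)
    K, d = uniq.numel(), queries.size(-1)
    # Prototype = mean of the queries assigned to each bucket.
    protos = torch.zeros(K, d, device=queries.device)
    protos.index_add_(0, inverse, queries)
    counts = torch.bincount(inverse, minlength=K).clamp(min=1)
    protos = protos / counts.unsqueeze(-1)
    # Prototype-key interaction: (K, N) scores rather than (N, N).
    attn = torch.softmax(protos @ keys.t() / d ** 0.5, dim=-1)
    out_protos = attn @ values                     # (K, d)
    # Broadcast each prototype's output back to its member queries.
    return out_protos[inverse]                     # (N, d)
\end{verbatim}

Because the clustering adapts to the query distribution at inference time and the keys and values are untouched, this module can replace standard self-attention in a pretrained DETR without retraining, which is what enables the drop-in usage claimed above.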