Recently, DETR~\cite{carion2020end} pioneered solving vision tasks with transformers: it directly translates the image feature map into the object detection result. Though effective, translating the full feature map can be costly due to redundant computation on areas such as the background. In this work, we encapsulate the idea of reducing spatial redundancy into a novel poll and pool (PnP) sampling module, with which we build an end-to-end PnP-DETR architecture that adaptively allocates its computation spatially to be more efficient. Concretely, the PnP module abstracts the image feature map into fine foreground object feature vectors and a small number of coarse background contextual feature vectors. The transformer models information interaction within the fine-coarse feature space and translates the features into the detection result. Moreover, the PnP-augmented model can instantly achieve various desired trade-offs between performance and computation with a single model by varying the sampled feature length, without needing to train multiple models as existing methods do. It thus offers greater flexibility for deployment in diverse scenarios with varying computation constraints. We further validate the generalizability of the PnP module on \textbf{panoptic segmentation} and the recent transformer-based image recognition model \textbf{ViT}~\cite{dosovitskiy2020image}, and show consistent efficiency gains. We believe our method takes a step toward efficient visual analysis with transformers, wherein spatial redundancy is commonly observed. Code will be available at \url{https://github.com/twangnh/pnp-detr}.
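To make the poll-and-pool idea concrete, the sketch below illustrates one plausible reading of the abstract: a \emph{poll} step scores every spatial location and keeps the top-scoring fraction as fine feature vectors, and a \emph{pool} step aggregates the remaining locations into a small fixed number of coarse context vectors. This is a minimal NumPy illustration, not the paper's implementation: the scorer and aggregation weights are learned in the actual model, and are stood in here by random projections for demonstration.

```python
import numpy as np

def pnp_sample(feat, alpha=0.33, m=16, rng=None):
    """Illustrative poll-and-pool (PnP) sampling sketch.

    feat:  (N, C) flattened feature map, N = H*W spatial locations.
    alpha: poll ratio, the fraction of locations kept as fine features.
    m:     number of coarse background context vectors produced by pooling.
    NOTE: the scoring vector and aggregation weights are learned in the
    paper; random projections are used here purely for illustration.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = feat.shape
    k = max(1, int(alpha * n))

    # Poll: score every spatial location and keep the top-k as fine features.
    w_score = rng.standard_normal(c)
    scores = feat @ w_score
    fine_idx = np.argsort(scores)[-k:]       # indices of top-k locations
    fine = feat[fine_idx]                    # (k, C) fine foreground features

    # Pool: aggregate the remaining locations into m coarse context vectors.
    rest_mask = np.ones(n, dtype=bool)
    rest_mask[fine_idx] = False
    rest = feat[rest_mask]                   # (N-k, C) background locations
    w_agg = rng.standard_normal((c, m))
    attn = rest @ w_agg                      # (N-k, m) aggregation logits
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)  # normalize over locations
    coarse = attn.T @ rest                   # (m, C) coarse context vectors

    # The transformer then operates on this shorter fine-coarse sequence.
    return np.concatenate([fine, coarse], axis=0)  # (k + m, C)
```

Varying `alpha` at inference time changes the sampled feature length, which is the mechanism behind the single-model performance-computation trade-off described above.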