Compared with previous two-stream trackers, the recent one-stream tracking pipeline, which allows earlier interaction between the template and search region, has achieved a remarkable performance gain. However, existing one-stream trackers always let the template interact with all parts of the search region throughout all encoder layers. This can lead to target-background confusion when the extracted feature representations are not sufficiently discriminative. To alleviate this issue, we propose a generalized relation modeling method based on adaptive token division. The proposed method is a generalized formulation of attention-based relation modeling for Transformer tracking, which inherits the merits of both the two-stream and one-stream pipelines while enabling more flexible relation modeling by selecting appropriate search tokens to interact with template tokens. An attention masking strategy and the Gumbel-Softmax technique are introduced to facilitate the parallel computation and end-to-end learning of the token division module. Extensive experiments show that our method is superior to the two-stream and one-stream pipelines and achieves state-of-the-art performance on six challenging benchmarks at a real-time running speed.
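The two mechanisms named in the abstract, Gumbel-Softmax sampling of a token division and an attention mask that restricts template-search interaction, can be sketched as follows. This is an illustrative NumPy reconstruction under stated assumptions, not the authors' implementation: the function names, the two-category (interact / don't interact) division, and the boolean-mask convention are all assumptions for exposition; in a real tracker the hard sample would use a straight-through gradient estimator inside a deep-learning framework.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, hard=True, rng=None):
    """Sample a (near-)categorical decision per row from unnormalized logits.

    Soft mode returns a relaxed probability vector; hard mode returns a
    one-hot vector (in a framework, combined with the soft sample via the
    straight-through trick so gradients flow end-to-end).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Gumbel(0, 1) noise: g = -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = np.exp((logits + g) / tau)
    y = y / y.sum(axis=-1, keepdims=True)
    if hard:
        onehot = np.zeros_like(y)
        onehot[np.arange(y.shape[0]), y.argmax(axis=-1)] = 1.0
        return onehot  # framework version: (onehot - y).detach() + y
    return y

def attention_mask(division, n_t):
    """Build a boolean attention mask from a per-search-token division.

    division: (n_s,) array, 1 if the search token may interact with the
    template, 0 otherwise. Tokens 0..n_t-1 are template tokens, the rest
    are search tokens. True = attention allowed.
    """
    n_s = division.shape[0]
    n = n_t + n_s
    mask = np.ones((n, n), dtype=bool)
    blocked = division == 0
    # Template tokens may not attend to non-selected search tokens ...
    mask[:n_t, n_t:][:, blocked] = False
    # ... and non-selected search tokens may not attend to the template.
    mask[n_t:, :n_t][blocked, :] = False
    return mask

# Usage sketch: divide 3 search tokens with 2-way logits, then mask a
# 2-template + 3-search joint attention matrix.
logits = np.zeros((3, 2))                 # hypothetical division logits
choice = gumbel_softmax(logits, hard=True)
division = choice[:, 1].astype(int)       # column 1 = "interact" class
mask = attention_mask(division, n_t=2)
```

Because the division is expressed as a mask rather than as tensor slicing, all tokens stay in one batched attention call, which is what makes the parallel computation mentioned in the abstract possible.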