Existing Visual Object Tracking (VOT) methods typically take only the target area in the first frame as a template. This causes tracking to inevitably fail in fast-changing and crowded scenes, as the tracker cannot account for changes in object appearance between frames. To this end, we revamp the tracking framework with the Progressive Context Encoding Transformer Tracker (ProContEXT), which coherently exploits spatial and temporal contexts to predict object motion trajectories. Specifically, ProContEXT leverages a context-aware self-attention module to encode the spatial and temporal context, refining and updating multi-scale static and dynamic templates to progressively perform accurate tracking. It explores the complementarity between spatial and temporal contexts, opening a new pathway to multi-context modeling for transformer-based trackers. In addition, ProContEXT revises the token pruning technique to reduce computational complexity. Extensive experiments on popular benchmark datasets such as GOT-10k and TrackingNet demonstrate that the proposed ProContEXT achieves state-of-the-art performance.
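To make the idea concrete, the sketch below (not the authors' released implementation) shows how a transformer tracker could jointly self-attend over static template tokens, dynamic template tokens, and search-region tokens, then prune search tokens that receive little attention from the templates. The class name `ContextEncoderLayer`, the `keep_ratio` parameter, and the attention-based scoring rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of joint context encoding with token pruning, assuming a
# PyTorch setup. All module and parameter names here are hypothetical.
import torch
import torch.nn as nn


class ContextEncoderLayer(nn.Module):
    """One encoder layer: self-attention over the concatenated template and
    search tokens, followed by pruning of low-attention search tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 8, keep_ratio: float = 0.7):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.keep_ratio = keep_ratio  # fraction of search tokens retained

    def forward(self, template_tokens, search_tokens):
        # Concatenate spatial context (static templates), temporal context
        # (dynamic templates), and search-region tokens into one sequence.
        # template_tokens: (B, n_t, dim), search_tokens: (B, n_s, dim)
        x = torch.cat([template_tokens, search_tokens], dim=1)
        h = self.norm1(x)
        attn_out, attn_weights = self.attn(h, h, h)  # weights: (B, L, L)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))

        # Token pruning: score each search token by the average attention it
        # receives from the template tokens, keep only the top fraction.
        n_t = template_tokens.shape[1]
        scores = attn_weights[:, :n_t, n_t:].mean(dim=1)  # (B, n_s)
        n_keep = max(1, int(self.keep_ratio * scores.shape[1]))
        idx = scores.topk(n_keep, dim=1).indices
        search_out = x[:, n_t:, :]
        search_out = torch.gather(
            search_out, 1, idx.unsqueeze(-1).expand(-1, -1, search_out.shape[-1])
        )
        return x[:, :n_t, :], search_out


# Usage: two templates (e.g. one static, one dynamic) of 64 tokens each,
# plus a 256-token search region; ~70% of search tokens survive pruning.
layer = ContextEncoderLayer()
templates = torch.randn(2, 2 * 64, 256)
search = torch.randn(2, 256, 256)
t_out, s_out = layer(templates, search)
```

One plausible design choice shown here is deriving pruning scores from template-to-search attention, so that tokens irrelevant to all templates are discarded and later layers operate on a shorter sequence, which is where the computational savings come from.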