We present a novel transformer-based architecture for global multi-object tracking. Our network takes a short sequence of frames as input and produces global trajectories for all objects. The core component is a global tracking transformer that operates on objects from all frames in the sequence. The transformer encodes object features from all frames, and uses trajectory queries to group them into trajectories. The trajectory queries are object features from a single frame and naturally produce unique trajectories. Our global tracking transformer does not require intermediate pairwise grouping or combinatorial association, and can be jointly trained with an object detector. It achieves competitive performance on the popular MOT17 benchmark, with 75.3 MOTA and 59.1 HOTA. More importantly, our framework seamlessly integrates into state-of-the-art large-vocabulary detectors to track any objects. Experiments on the challenging TAO dataset show that our framework consistently improves upon baselines that are based on pairwise association, outperforming published works by a significant 7.7 tracking mAP. Code is available at https://github.com/xingyizhou/GTR.
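The core mechanism described above — trajectory queries (object features from a single frame) attending over object features from all frames to group detections into trajectories — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes plain dot-product similarity and a per-detection softmax over queries, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def associate(queries, features):
    """Toy query-based association (illustrative, not the paper's exact model).

    queries:  (Q, D) trajectory queries, i.e. object features from one frame
    features: (N, D) object features from all frames in the temporal window
    returns:  (N,) a trajectory id for each detection in the window
    """
    scores = queries @ features.T           # (Q, N) similarity logits
    assignment = softmax(scores, axis=0)    # each detection distributes over queries
    return assignment.argmax(axis=0)        # hard trajectory id per detection

# Two trajectory queries; three detections across the window, where
# detections 0 and 2 resemble query 0 and detection 1 resembles query 1.
queries = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
features = np.array([[5.0, 0.0],
                     [0.0, 5.0],
                     [5.0, 0.0]])
ids = associate(queries, features)
print(ids.tolist())  # → [0, 1, 0]
```

Because each query originates from a distinct object in a single frame, detections grouped under the same query naturally form one unique trajectory, without pairwise grouping or combinatorial matching.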