Object tracking (OT) aims to estimate the positions of target objects in a video sequence. Depending on whether the initial states of the targets are specified by annotations provided in the first frame or by object categories, OT tasks can be classified into instance tracking (e.g., SOT and VOS) and category tracking (e.g., MOT, MOTS, and VIS). Combining the best practices developed in both communities, we propose a novel tracking-with-detection paradigm, where tracking supplies appearance priors for detection and detection provides tracking with candidate bounding boxes for association. Building on this design, we further present OmniTracker, a unified tracking model that resolves all of these tracking tasks with a fully shared network architecture, model weights, and inference pipeline. Extensive experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
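The tracking-with-detection paradigm described above couples per-frame detection with cross-frame association. The following is a minimal, illustrative sketch of that loop, not the paper's actual method: the detector yields candidate boxes each frame, and tracks greedily claim the best-overlapping candidate. All function names and the IoU-based matching rule are assumptions for illustration; the paper's association additionally uses appearance priors.

```python
def iou(a, b):
    # Intersection-over-union for boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_thresh=0.3):
    """Greedy association: each track claims its highest-IoU unmatched
    candidate box produced by the detector for the current frame."""
    assignments = {}
    used = set()
    for tid, box in tracks.items():
        best, best_score = None, iou_thresh
        for di, det in enumerate(detections):
            if di in used:
                continue
            score = iou(box, det)
            if score > best_score:
                best, best_score = di, score
        if best is not None:
            used.add(best)
            assignments[tid] = detections[best]
    return assignments
```

In a full tracker, the track states returned here would in turn condition the detector on the next frame (the "appearance prior" direction of the paradigm); that feedback is omitted in this sketch.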