First, a new multi-object tracking framework based on multi-modal fusion is proposed in this paper. By integrating object detection and multi-object tracking into the same model, the framework avoids the complex data-association process of the classical tracking-by-detection (TBD) paradigm and requires no additional training. Second, the confidence of historical trajectory regression is explored, the possible states of a trajectory in the current frame (weak object or strong object) are analyzed, and a confidence fusion module is designed to guide the non-maximum suppression of trajectories and detections for ordered association. Finally, extensive experiments are conducted on the KITTI and Waymo datasets. The results show that the proposed method achieves robust tracking using only detectors of two modalities and is more accurate than many recent multi-modal tracking methods based on the TBD paradigm. The source code of the proposed method is available at https://github.com/wangxiyang2022/YONTD-MOT.
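To make the "confidence-guided non-maximum suppression for ordered association" idea concrete, below is a minimal sketch, not taken from the YONTD-MOT repository: it assumes trajectory-regressed boxes and detector boxes are pooled, ranked by a fused confidence score, and suppressed jointly by IoU so that higher-confidence trajectories or detections take precedence. The function names (`confidence_guided_nms`, `iou`), the IoU threshold, and the input layout are illustrative assumptions; the actual fusion rule in the paper may differ.

```python
# Illustrative sketch (not the authors' code): joint NMS over trajectory-regressed
# boxes and detector boxes, processed in descending order of fused confidence.
import numpy as np

def iou(box, boxes):
    """IoU between one box [x1, y1, x2, y2] and an array of boxes of shape (N, 4)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def confidence_guided_nms(traj_boxes, traj_conf, det_boxes, det_conf, iou_thr=0.5):
    """Jointly suppress trajectory and detection boxes, highest confidence first.

    traj_conf is assumed to already be a fused score (e.g. combining historical
    regression confidence with the current-frame score); det_conf is the raw
    detector score. Returns the kept boxes, their scores, and a flag indicating
    whether each kept box originated from a trajectory.
    """
    boxes = np.concatenate([traj_boxes, det_boxes], axis=0)
    conf = np.concatenate([traj_conf, det_conf], axis=0)
    is_traj = np.concatenate([np.ones(len(traj_boxes), dtype=bool),
                              np.zeros(len(det_boxes), dtype=bool)])
    order = np.argsort(-conf)              # process in descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps < iou_thr]    # drop boxes explained by the kept one
    keep = np.array(keep)
    return boxes[keep], conf[keep], is_traj[keep]
```

In this reading, a confident historical trajectory (a "strong object") can claim a region before a lower-scoring detection does, while a weak trajectory is overridden by a stronger detection, which is one plausible way to realize the ordered association described above.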