In the field of multi-object tracking (MOT), recent Transformer-based end-to-end models such as MOTR have demonstrated exceptional performance on datasets such as DanceTrack. However, the computational demands of these models make them costly to train and deploy. Drawing inspiration from successful decoder-only models such as GPT, we present MO-YOLO, an efficient and computationally frugal end-to-end MOT model. MO-YOLO integrates principles from You Only Look Once (YOLO) and RT-DETR, adopting a decoder-only design: by combining the decoder from RT-DETR with architectural components from YOLOv8, it achieves high inference speed, shorter training times, and strong MOT performance. On DanceTrack, MO-YOLO not only matches MOTR's accuracy but surpasses it, while running at more than twice the frame rate (9.5 FPS for MOTR versus 19.6 FPS for MO-YOLO). Furthermore, MO-YOLO requires significantly less training time and lower-end hardware than MOTR. This research introduces a promising paradigm for efficient end-to-end MOT, emphasizing both enhanced performance and resource efficiency.