Traditional object detectors are ill-equipped for incremental learning, and directly fine-tuning a well-trained detection model on new data alone leads to catastrophic forgetting. Knowledge distillation is a straightforward way to mitigate catastrophic forgetting. In Incremental Object Detection (IOD), previous work mainly focuses on feature-level knowledge distillation, but the different responses of the detector have not been fully explored. In this paper, we propose a fully response-based incremental distillation method that learns responses from detection bounding boxes and classification predictions. First, our method transfers category knowledge while equipping the student model with the ability to retain localization knowledge during incremental learning. In addition, we evaluate the quality of all locations and provide valuable responses through an adaptive pseudo-label selection (APS) strategy. Finally, we elucidate that knowledge from different responses should be assigned different importance during incremental distillation. Extensive experiments on MS COCO demonstrate significant advantages of our method, which substantially narrows the performance gap towards full training.
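The response-based distillation described above can be sketched as a loss over the teacher's and student's classification and box predictions. The following is a minimal illustration, not the paper's exact formulation: the function names, the temperature-scaled KL term for category knowledge, the squared-error term for localization knowledge, and the weights `w_cls` and `w_box` (reflecting that different responses carry different importance) are all assumptions for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def response_distill_loss(t_cls, s_cls, t_box, s_box,
                          temperature=2.0, w_cls=1.0, w_box=0.5):
    """Hypothetical response-level distillation loss.

    t_cls, s_cls: (N, C) classification logits from teacher/student.
    t_box, s_box: (N, 4) box regression outputs from teacher/student.
    Combines a KL term on softened class responses with a squared-error
    term on box responses, weighted differently per response type.
    """
    p_t = softmax(t_cls, temperature)
    p_s = softmax(s_cls, temperature)
    # KL(teacher || student) on softened class distributions.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # Mean squared error on box regression responses.
    box = np.mean((t_box - s_box) ** 2)
    return w_cls * (temperature ** 2) * kl + w_box * box
```

When the student exactly matches the teacher's responses, the loss is zero; it grows as either the class distributions or the boxes diverge.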
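The APS idea of evaluating the quality of all locations rather than applying a fixed confidence cutoff can be illustrated with a simple per-image statistical threshold. This is a hypothetical sketch, not the paper's actual APS strategy: the mean-plus-std rule and the function name are assumptions made for illustration.

```python
import numpy as np

def adaptive_pseudo_label_select(scores, boxes):
    """Hypothetical adaptive pseudo-label selection.

    scores: (N,) teacher confidence scores for candidate detections.
    boxes:  (N, 4) corresponding boxes.
    Keeps detections whose score exceeds an image-adaptive threshold
    (mean + std of the scores) instead of a hand-tuned fixed cutoff,
    so the threshold tracks the score distribution of each image.
    """
    threshold = scores.mean() + scores.std()
    keep = scores >= threshold
    return boxes[keep], scores[keep]
```

An adaptive threshold avoids the failure mode of a fixed cutoff, which can pass too many low-quality responses on easy images and too few on hard ones.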