Incremental object detection (IOD) aims to train an object detector in phases, each with annotations for new object categories. Like other incremental settings, IOD is subject to catastrophic forgetting, which is often addressed by techniques such as knowledge distillation (KD) and exemplar replay (ER). However, KD and ER do not work well when applied directly to state-of-the-art transformer-based object detectors such as Deformable DETR and UP-DETR. In this paper, we solve these issues by proposing a ContinuaL DEtection TRansformer (CL-DETR), a new method for transformer-based IOD which enables effective use of KD and ER in this context. First, we introduce a Detector Knowledge Distillation (DKD) loss that focuses on the most informative and reliable predictions from old versions of the model, ignores redundant background predictions, and ensures compatibility with the available ground-truth labels. We also improve ER by proposing a calibration strategy that preserves the label distribution of the training set, thereby better matching training and testing statistics. We conduct extensive experiments on COCO 2017 and demonstrate that CL-DETR achieves state-of-the-art results in the IOD setting.
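To make the DKD idea concrete, here is a minimal sketch of the pseudo-label selection step it implies, assuming a DETR-like detector whose queries emit class logits with a trailing background class. The function and tensor names are illustrative assumptions, not the authors' actual API.

```python
# Minimal sketch of a DKD-style target selection step, assuming a
# DETR-like detector whose queries produce class logits with the last
# column reserved for "background". Names here are hypothetical.
import torch

def select_distillation_targets(old_logits: torch.Tensor,
                                gt_matched: torch.Tensor,
                                top_k: int) -> torch.Tensor:
    """Return indices of the old model's most reliable foreground
    predictions, to be distilled alongside the real ground-truth labels.

    old_logits: (num_queries, num_classes + 1) raw logits; the last
                column is the background class.
    gt_matched: (num_queries,) bool mask, True where a query is already
                matched to a ground-truth box and keeps its real label.
    """
    probs = old_logits.softmax(dim=-1)
    # Ignore redundant background predictions: score each query by its
    # best foreground probability only.
    fg_conf = probs[:, :-1].max(dim=-1).values
    # Never overwrite ground-truth matches with old-model predictions.
    fg_conf = fg_conf.masked_fill(gt_matched, float("-inf"))
    return fg_conf.topk(min(top_k, fg_conf.numel())).indices
```

The selected queries would then contribute a distillation term (for instance, a divergence between old- and new-model class distributions), while ground-truth-matched queries remain supervised by their real labels, which is how such a loss stays compatible with the available annotations.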
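The ER calibration can likewise be illustrated with a small, self-contained sketch. Assuming each candidate replay image is tagged with the category IDs it contains, a greedy selector (an assumption made for illustration; the paper's exact procedure may differ) can pick a buffer whose pooled label counts track the label distribution of the full training set:

```python
# Illustrative greedy exemplar selection that preserves the training-set
# label distribution. The data layout and greedy rule are assumptions
# made for this sketch, not the paper's exact algorithm.
from collections import Counter

def calibrated_exemplars(image_labels: dict[int, list[int]],
                         target_dist: dict[int, float],
                         budget: int) -> list[int]:
    """Pick up to `budget` image IDs whose combined label counts best
    match `target_dist`, the class distribution of the full training set."""
    chosen: list[int] = []
    counts: Counter = Counter()
    pool = set(image_labels)

    def gap_after_adding(img: int) -> float:
        # L1 distance between the buffer's label distribution (after
        # hypothetically adding `img`) and the target distribution.
        trial = counts + Counter(image_labels[img])
        total = sum(trial.values()) or 1
        return sum(abs(trial[c] / total - p) for c, p in target_dist.items())

    for _ in range(min(budget, len(pool))):
        best = min(pool, key=gap_after_adding)
        pool.remove(best)
        chosen.append(best)
        counts += Counter(image_labels[best])
    return chosen
```

Matching the buffer's label statistics to the training set means the detector sees roughly the same class frequencies at replay time as at test time, which is the calibration the abstract refers to.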