Despite recent advances in object detection, common architectures are still ill-suited to incrementally detecting new categories over time. They are vulnerable to catastrophic forgetting: they forget what has already been learned while updating their parameters in the absence of the original training data. Previous works extended standard classification methods to the object detection task, mainly adopting the knowledge distillation framework. However, we argue that object detection introduces an additional problem, which has been overlooked. While objects belonging to new classes are learned thanks to their annotations, if no supervision is provided for other objects that may still be present in the input, the model learns to associate them with background regions. We propose to handle these missing annotations by revisiting the standard knowledge distillation framework. Our approach outperforms current state-of-the-art methods in every setting of the Pascal-VOC 2007 dataset. Moreover, we propose a simple extension to instance segmentation, showing that it outperforms the other baselines.
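To make the missing-annotation issue concrete, below is a minimal PyTorch-style sketch, not the paper's actual formulation: the function name, tensor layout, and exact loss form are illustrative assumptions. The idea it illustrates is that regions labeled "background" in the incremental step may contain objects of old classes, so instead of pushing them toward the background class, they are supervised by the frozen old model's predictions over the previously seen classes.

```python
import torch
import torch.nn.functional as F


def incremental_cls_loss(new_logits, old_logits, labels, bg_index=0):
    # new_logits: (N, 1 + C_old + C_new) classification scores of the current model
    # old_logits: (N, 1 + C_old) scores of the frozen model from the previous step
    # labels:     (N,) ground truth; only new classes and "background" are annotated
    n_old = old_logits.shape[1]  # background + old classes

    fg = labels != bg_index  # regions annotated with a new class
    bg = ~fg                 # regions annotated as background

    # Standard cross-entropy on regions annotated with the new classes.
    loss_new = (F.cross_entropy(new_logits[fg], labels[fg])
                if fg.any() else new_logits.new_zeros(()))

    # "Background" regions may actually contain old-class objects: distill the
    # old model's distribution over {background, old classes} onto them rather
    # than forcing them toward the background class.
    old_probs = F.softmax(old_logits[bg], dim=1)
    new_logp = F.log_softmax(new_logits[bg][:, :n_old], dim=1)
    loss_distill = (-(old_probs * new_logp).sum(dim=1).mean()
                    if bg.any() else new_logits.new_zeros(()))

    return loss_new + loss_distill
```

Renormalizing the current model's logits over only the old classes before distillation follows common practice in distillation-based incremental detection; the particular combination above is a sketch under these assumptions, not the method's definitive implementation.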