Despite recent advances in the field of object detection, common architectures are still ill-suited to incrementally detecting new categories over time. They are vulnerable to catastrophic forgetting: they forget what has already been learned while updating their parameters in the absence of the original training data. Previous works extended standard classification methods to the object detection task, mainly adopting the knowledge distillation framework. However, we argue that object detection introduces an additional problem, which has been overlooked. While objects belonging to new classes are learned thanks to their annotations, no supervision is provided for other objects that may still be present in the input, so the model learns to associate them with background regions. We propose to handle these missing annotations by revisiting the standard knowledge distillation framework. Our approach outperforms current state-of-the-art methods in every setting of the Pascal-VOC dataset. We further propose an extension to instance segmentation, outperforming the other baselines. Code can be found here: https://github.com/fcdl94/MMA
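To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation; all names and the exact formulation are illustrative assumptions) of one way a distillation loss can account for missing annotations: the probability mass the new model assigns to newly added classes is folded into its background probability before comparing against the old model, so regions the old model labeled "background" are not penalized when the new model assigns them to a new class.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def unbiased_distillation_loss(old_logits, new_logits, num_old):
    """Cross-entropy distillation for a single region proposal.

    Hypothetical sketch: index 0 is background, indices 1..num_old-1 are
    the old classes (shared by both models), and the remaining indices of
    new_logits are newly added classes.  New-class probability mass is
    folded into the background entry, so the new model is free to predict
    a new class where the old model predicted background.
    """
    p_old = softmax(old_logits)   # old model: background + old classes
    p_new = softmax(new_logits)   # new model: background + old + new classes
    # Fold new-class mass into the background probability.
    q = [p_new[0] + sum(p_new[num_old:])] + p_new[1:num_old]
    eps = 1e-12  # avoid log(0)
    return -sum(po * math.log(qi + eps) for po, qi in zip(p_old, q))
```

With this folding, a region the old model considered background yields a small loss whether the new model keeps it as background or assigns it to a new class, while reassigning it to an old class is still penalized.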