Conventional detection networks usually need abundant labeled training samples, whereas humans can learn new concepts incrementally from just a few examples. This paper focuses on a more challenging yet realistic problem: class-incremental few-shot object detection (iFSD). The goal is to incrementally adapt the model to novel objects from only a few annotated samples without catastrophically forgetting the previously learned classes. To tackle this problem, we propose LEAST, a novel method that transfers with Less forgetting, fEwer training resources, And Stronger Transfer capability. Specifically, we first present a transfer strategy that reduces unnecessary weight adaptation and improves the transfer capability for iFSD. On this basis, we integrate the knowledge distillation technique in a less resource-consuming way to alleviate forgetting, and propose a novel clustering-based exemplar selection process to preserve the more discriminative features learned previously. Being a generic and effective method, LEAST can largely improve iFSD performance on various benchmarks.
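To make the clustering-based exemplar selection idea concrete, the sketch below shows one plausible realization: cluster the feature embeddings of each previously learned class with k-means and keep, for each cluster, the real sample closest to its centroid, so the retained exemplars cover the distinct modes of the class rather than a single mean. This is a minimal illustration under that assumption, not the paper's actual procedure; the function name, the choice of k-means, and the one-exemplar-per-cluster rule are all hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_exemplars(features: np.ndarray, n_exemplars: int) -> np.ndarray:
    """Pick exemplar indices for one class by clustering its feature
    embeddings and keeping the sample nearest to each cluster center.

    features: (N, D) array of embeddings for all stored samples of a class.
    n_exemplars: number of exemplars (= number of clusters) to keep.
    Returns the indices of the selected exemplars.
    """
    kmeans = KMeans(n_clusters=n_exemplars, n_init=10).fit(features)
    exemplar_idx = []
    for center in kmeans.cluster_centers_:
        # Distance of every real sample to this cluster center; keep the
        # closest one so each exemplar represents a distinct feature mode.
        dists = np.linalg.norm(features - center, axis=1)
        exemplar_idx.append(int(np.argmin(dists)))
    return np.array(exemplar_idx)

# Example usage (hypothetical shapes): keep 5 exemplars per base class
# from 200 stored D=1024 RoI embeddings of that class.
feats = np.random.randn(200, 1024).astype(np.float32)
keep = select_exemplars(feats, n_exemplars=5)
```

Compared with keeping the samples nearest to the class mean, selecting one exemplar per cluster preserves multiple discriminative feature modes, which is consistent with the abstract's goal of retaining "more discriminative features previously learned".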