Exemplar-free incremental learning is extremely challenging due to the inaccessibility of data from old tasks. In this paper, we attempt to exploit the knowledge encoded in a previously trained classification model to handle the catastrophic forgetting problem in continual learning. Specifically, we introduce a so-called knowledge delegator, which is capable of transferring knowledge from the trained model to a randomly re-initialized new model by generating informative samples. Given only the previous model, the delegator is effectively learned in a data-free manner using a self-distillation mechanism. The knowledge extracted by the delegator is then utilized to maintain the model's performance on old tasks during incremental learning. This simple incremental learning framework surpasses existing exemplar-free methods by a large margin on four widely used class-incremental benchmarks, namely CIFAR-100, ImageNet-Subset, Caltech-101 and Flowers-102. Notably, we achieve performance comparable to some exemplar-based methods without accessing any exemplars.
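The abstract describes two stages: a delegator learned data-free from the frozen previous model, and an incremental step in which its generated samples carry old-task knowledge into a re-initialized model. The following is a minimal PyTorch-style sketch of that idea, not the authors' implementation; all module sizes, losses, loss weights and names (e.g. `delegator`, `distill_weight`) are illustrative assumptions, and the delegator objective shown is a generic data-free distillation criterion used as a stand-in for the paper's self-distillation mechanism.

```python
# Minimal sketch (assumed, not the authors' code) of data-free delegator
# learning followed by distillation-based incremental training.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_OLD_CLASSES, NOISE_DIM = 64, 10, 32

old_model = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, NUM_OLD_CLASSES))
old_model.eval()
for p in old_model.parameters():          # the previous model stays frozen
    p.requires_grad_(False)

delegator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                          nn.Linear(128, FEAT_DIM))
opt_g = torch.optim.Adam(delegator.parameters(), lr=1e-3)

# Stage 1 (data-free): generated samples should be classified confidently by
# the old model (low per-sample entropy) while the batch covers all old
# classes (high batch-level entropy) -- an assumed stand-in objective.
for step in range(200):
    z = torch.randn(64, NOISE_DIM)
    logits = old_model(delegator(z))
    probs = F.softmax(logits, dim=1)
    sample_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
    mean_probs = probs.mean(0)
    batch_entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum()
    loss_g = sample_entropy - batch_entropy
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Stage 2 (incremental): a re-initialized model learns the new task with
# cross-entropy, while generated samples transfer old-task knowledge via a
# KL distillation term against the frozen old model.
NUM_TOTAL_CLASSES = NUM_OLD_CLASSES + 5
new_model = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, NUM_TOTAL_CLASSES))
opt_n = torch.optim.Adam(new_model.parameters(), lr=1e-3)
T, distill_weight = 2.0, 1.0              # assumed hyper-parameters

for step in range(200):
    # dummy new-task batch (placeholder for real incremental data)
    x_new = torch.randn(32, FEAT_DIM)
    y_new = torch.randint(NUM_OLD_CLASSES, NUM_TOTAL_CLASSES, (32,))
    ce = F.cross_entropy(new_model(x_new), y_new)

    with torch.no_grad():
        x_gen = delegator(torch.randn(32, NOISE_DIM))
        teacher = F.softmax(old_model(x_gen) / T, dim=1)
    student = F.log_softmax(new_model(x_gen)[:, :NUM_OLD_CLASSES] / T, dim=1)
    kd = F.kl_div(student, teacher, reduction="batchmean") * T * T

    loss = ce + distill_weight * kd
    opt_n.zero_grad(); loss.backward(); opt_n.step()
```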