This paper presents a practical, simple, yet efficient method to mitigate catastrophic forgetting in Class Incremental Learning (CIL). CIL aims to learn new concepts well, but not at the expense of performance on previously seen data. Learning new knowledge in the absence of instances from earlier classes, or with imbalanced samples of old and new classes, makes CIL a persistently challenging problem. These issues can be tackled by storing exemplars from previous tasks or by adopting a rehearsal strategy. Inspired by rehearsal approaches that rely on generative models, we propose ClaRe, an efficient CIL solution that remembers the representations of the learned classes at each increment. This allows us to generate instances that follow the distribution of the previously learned classes; the model is then effectively retrained from scratch on a new training set containing both the new samples and the generated ones, which also resolves the data-imbalance problem. ClaRe generalizes better than prior methods because it produces diverse instances drawn from the distributions of previously learned classes. We comprehensively evaluate ClaRe on the MNIST benchmark, and the results show very little degradation in accuracy as new knowledge arrives over time. Furthermore, unlike most existing solutions, ClaRe does not suffer from the memory limitation that is considered a consequential issue in this research area.
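To make the rehearsal-with-generation idea concrete, the following is a minimal sketch of generative replay for class-incremental training, written in PyTorch. It is an illustration under assumptions, not the authors' exact implementation: a class-conditional generator (here called `vae`, with an assumed `decode(z, y)` method) stands in for the remembered class representations, and names such as `classifier`, `new_dataset`, and `latent_dim` are hypothetical placeholders.

```python
# Sketch of generative replay for class-incremental learning (PyTorch).
# ASSUMPTION: `vae.decode(z, y)` is a hypothetical class-conditional decoder;
# the paper's actual generator/representation memory may differ.
import torch
from torch.utils.data import TensorDataset, DataLoader, ConcatDataset

def build_replay_set(vae, old_labels, n_per_class, latent_dim, device="cpu"):
    """Sample pseudo-instances of previously learned classes from the generator."""
    xs, ys = [], []
    vae.eval()
    with torch.no_grad():
        for c in old_labels:
            z = torch.randn(n_per_class, latent_dim, device=device)
            y = torch.full((n_per_class,), c, dtype=torch.long, device=device)
            xs.append(vae.decode(z, y))  # generate samples that follow class c's distribution
            ys.append(y)
    return TensorDataset(torch.cat(xs).cpu(), torch.cat(ys).cpu())

def incremental_step(classifier, vae, new_dataset, old_labels, optimizer,
                     latent_dim, n_per_class=500, epochs=5, device="cpu"):
    """Retrain the classifier on new-class data mixed with generated replays of old classes."""
    replay = build_replay_set(vae, old_labels, n_per_class, latent_dim, device)
    loader = DataLoader(ConcatDataset([new_dataset, replay]),
                        batch_size=128, shuffle=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    classifier.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(classifier(x), y)
            loss.backward()
            optimizer.step()
```

Because the replay set can contain as many generated samples per old class as desired, the combined training set stays balanced between old and new classes, which is the imbalance remedy the abstract refers to.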