The challenge of Class Incremental Learning~(CIL) lies in the difficulty for a learner to discern the data of old classes from that of new ones, since no data from previous classes is preserved. In this paper, we reveal three causes of catastrophic forgetting at the representational level, namely representation forgetting, representation overlapping, and classifier deviation. Based on these observations, we propose a new CIL framework, Contrastive Class Concentration for CIL (C4IL), which works in both memory-based and memory-free settings to alleviate the phenomenon of representation overlapping. Our framework leverages the class concentration effect of contrastive representation learning, thereby yielding a representation distribution with better intra-class compactness and inter-class separability. Quantitative experiments showcase the effectiveness of our framework: it outperforms the baseline methods by 5% in terms of average and top-1 accuracy in 10-phase and 20-phase CIL. Qualitative results also demonstrate that our method generates a more compact representation distribution that alleviates the overlapping problem.
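For concreteness, the class concentration effect can be illustrated with the standard supervised contrastive objective (a sketch of the typical formulation, e.g.\ the supervised contrastive loss; the exact loss used in C4IL may differ). For an anchor embedding $z_i$ with positive set $P(i)$ of same-class samples, candidate set $A(i)$ of all other samples in the batch, and temperature $\tau$,
\begin{equation}
\mathcal{L}_{\mathrm{con}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\left(z_i \cdot z_p / \tau\right)}{\sum_{a \in A(i)} \exp\left(z_i \cdot z_a / \tau\right)}.
\end{equation}
Minimizing a loss of this form pulls embeddings of the same class together and pushes embeddings of different classes apart, producing the intra-class compactness and inter-class separability described above.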