General Continual Learning (GCL) aims to learn from non-independent and identically distributed (non-i.i.d.) stream data without catastrophic forgetting of old tasks, and without relying on task boundaries during either the training or testing stage. We reveal that relation deviation and feature deviation are crucial causes of catastrophic forgetting, where relation deviation refers to the deficient relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework that mines the complementary outputs and features of the model to alleviate the two deviations during GCL. Specifically, we propose a new collaborative distillation approach to address relation deviation. It distills the model's outputs by utilizing the ensemble dark knowledge of the new model's outputs and the reserved outputs, which maintains performance on old tasks while balancing the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea that leverages pretext tasks and supervised contrastive learning to address feature deviation by learning complete and discriminative features for all classes. Extensive experiments on four popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods. Code is available at https://github.com/lijincm/CoCa.
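As a rough illustration of the two calibration ideas described above, the sketch below shows how an ensemble-teacher distillation term and a supervised contrastive term might be written in PyTorch. The function names, the equal 0.5 mixing weight, the temperatures, and the masking details are illustrative assumptions rather than the paper's exact formulation (see the released code at the URL above for the actual implementation).

```python
import torch
import torch.nn.functional as F


def collaborative_distillation_loss(new_logits, reserved_logits, temperature=2.0):
    """Illustrative ensemble-teacher distillation: the teacher distribution is an
    equal-weight mixture of the softened reserved outputs and the softened
    (detached) new outputs, and the new model is pulled toward this ensemble."""
    teacher = 0.5 * (F.softmax(reserved_logits / temperature, dim=1)
                     + F.softmax(new_logits.detach() / temperature, dim=1))
    student_log_prob = F.log_softmax(new_logits / temperature, dim=1)
    # KL divergence between the ensemble teacher and the new model's prediction.
    return F.kl_div(student_log_prob, teacher, reduction="batchmean") * temperature ** 2


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Standard supervised contrastive loss (SupCon-style) over L2-normalized
    features, used here only as a stand-in for the feature-calibration term."""
    n = features.size(0)
    sim = features @ features.t() / temperature              # pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, -1e9)                   # exclude self-pairs
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(1).clamp(min=1)
    # Average log-likelihood over same-class positives for each anchor.
    return -(log_prob * pos_mask).sum(1).div(pos_count).mean()
```

In a full training loop these terms would be combined with the usual classification loss, and the pretext-task component mentioned in the abstract (e.g., a rotation-prediction head) is omitted from this sketch.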