Dynamic expansion architectures are becoming popular in class incremental learning, mainly due to their advantages in alleviating catastrophic forgetting. However, task confusion is not well assessed within this framework: the discrepancy between classes of different tasks is not well learned (inter-task confusion, ITC), and the classifier still gives priority to the latest class batch (old-new confusion, ONC). We empirically validate the side effects of these two types of confusion. Meanwhile, we propose a novel solution called Task Correlated Incremental Learning (TCIL) to encourage discriminative and fair feature utilization across tasks. TCIL performs multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one. It establishes information flow paths at both the feature and logit levels, making the learning aware of old classes. In addition, an attention mechanism and classifier re-scoring are applied to generate fairer classification scores. We conduct extensive experiments on the CIFAR100 and ImageNet100 datasets. The results demonstrate that TCIL consistently achieves state-of-the-art accuracy. It mitigates both ITC and ONC, while showing advantages in combating catastrophic forgetting even when no rehearsal memory is reserved.
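To make the multi-level distillation idea concrete, below is a minimal PyTorch sketch of a loss that combines a feature-level term and a logit-level term, as the abstract describes. The function name, tensor names, and the specific choices of a feature MSE term and temperature-scaled KL on the old-class logits are illustrative assumptions, not necessarily the exact formulation used in TCIL.

```python
# Minimal sketch of a two-level (feature + logit) distillation loss.
# Assumptions: feat_old / feat_new are intermediate features from the
# frozen old model and the expanded new model; logit_old / logit_new
# are their logits over the old classes; T is a KD temperature.
import torch
import torch.nn.functional as F

def multi_level_kd_loss(feat_old, feat_new, logit_old, logit_new, T=2.0):
    # Feature-level distillation: align the new branch's features with
    # the frozen old-task features (here via mean-squared error).
    loss_feat = F.mse_loss(feat_new, feat_old.detach())

    # Logit-level distillation: match softened class distributions on
    # the old classes (standard Hinton-style KD with temperature T).
    p_old = F.softmax(logit_old.detach() / T, dim=1)
    log_p_new = F.log_softmax(logit_new / T, dim=1)
    loss_logit = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

    return loss_feat + loss_logit
```

In a typical dynamic-expansion setup, a term like this would be added to the cross-entropy loss on the new classes, so gradients for the new task are regularized by the old model's feature- and logit-level behavior.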