Deep neural networks (DNNs) suffer from catastrophic forgetting when learning incrementally, which greatly limits their applications. Although maintaining a handful of samples (called `exemplars`) from each task can alleviate forgetting to some extent, existing methods are still limited by the small number of exemplars: these exemplars are too few to carry enough task-specific knowledge, so forgetting remains. To overcome this problem, we propose to `imagine` diverse counterparts of the given exemplars by drawing on the abundant semantically irrelevant information in unlabeled data. Specifically, we develop a learnable feature generator that adaptively produces diverse counterparts of exemplars by combining the semantic information of the exemplars with semantically irrelevant information from unlabeled data. We introduce semantic contrastive learning to enforce that the generated samples are semantically consistent with the exemplars, and semantic-decoupling contrastive learning to encourage the diversity of the generated samples. The diverse generated samples can effectively prevent the DNN from forgetting when learning new tasks. Our method incurs no extra inference cost and outperforms state-of-the-art methods on two benchmarks, CIFAR-100 and ImageNet-Subset, by a clear margin.
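To make the described mechanism concrete, below is a minimal PyTorch sketch of how such a feature generator and the two contrastive objectives could be wired together. The class and function names, the InfoNCE-style formulation, and the temperature `tau` are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the generator and the two contrastive losses
# described in the abstract. All names and design details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    """Fuses an exemplar (semantic) feature with an unlabeled
    (semantically irrelevant) feature to produce a diversified counterpart."""
    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, exemplar_feat, unlabeled_feat):
        return self.fuse(torch.cat([exemplar_feat, unlabeled_feat], dim=-1))

def semantic_contrastive_loss(gen, exemplar, tau=0.1):
    """Pull each generated feature toward its source exemplar (positive)
    and away from the other exemplars in the batch (negatives)."""
    gen = F.normalize(gen, dim=-1)
    exemplar = F.normalize(exemplar, dim=-1)
    logits = gen @ exemplar.t() / tau                # (B, B) similarities
    targets = torch.arange(gen.size(0), device=gen.device)
    return F.cross_entropy(logits, targets)

def decoupling_contrastive_loss(gen, unlabeled, tau=0.1):
    """Encourage diversity: tie each generated feature to its unlabeled
    source, so different unlabeled inputs yield distinct counterparts."""
    gen = F.normalize(gen, dim=-1)
    unlabeled = F.normalize(unlabeled, dim=-1)
    logits = gen @ unlabeled.t() / tau
    targets = torch.arange(gen.size(0), device=gen.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: features assumed to come from a frozen backbone.
B, D = 32, 512
generator = FeatureGenerator(D)
ex_feat, un_feat = torch.randn(B, D), torch.randn(B, D)
gen_feat = generator(ex_feat, un_feat)
loss = (semantic_contrastive_loss(gen_feat, ex_feat)
        + decoupling_contrastive_loss(gen_feat, un_feat))
loss.backward()
```

In this reading, the first loss anchors the generated features to the exemplars' class semantics, while the second keeps them spread out according to the unlabeled inputs, which is one plausible way to realize "semantically consistent yet diverse" counterparts.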