Kernel continual learning by \citet{derakhshani2021kernel} has recently emerged as a strong continual learner due to its non-parametric ability to tackle task interference and catastrophic forgetting. Unfortunately, its success comes at the expense of an explicit memory that stores samples from past tasks, which hampers scalability to continual learning settings with a large number of tasks. In this paper, we introduce generative kernel continual learning, which explores and exploits the synergies between generative models and kernels for continual learning. The generative model is able to produce representative samples for kernel learning, which removes the dependence on memory in kernel continual learning. Moreover, as we replay only on the generative model, we avoid task interference while being computationally more efficient than previous methods that need to replay the entire model. We further introduce a supervised contrastive regularization, which enables our model to generate even more discriminative samples for better kernel-based classification performance. We conduct extensive experiments on three widely-used continual learning benchmarks that demonstrate the abilities and benefits of our contributions. Most notably, on the challenging SplitCIFAR100 benchmark, with just a simple linear kernel we obtain the same accuracy as kernel continual learning with variational random features for one tenth of the memory, or a 10.1\% accuracy gain for the same memory budget.
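To make the core idea concrete, the following is a minimal, self-contained sketch of replaying \emph{generated} samples into a kernel classifier instead of storing an explicit memory of past exemplars. It is not the paper's implementation: the class-conditional Gaussian generator, the feature dimensionality, and the kernel ridge classifier with a linear kernel are illustrative assumptions standing in for the learned generative model and the (variational random features) kernels used in the actual approach.

\begin{verbatim}
# Illustrative sketch (NOT the authors' implementation): replace an explicit
# memory buffer with a generative "memory" that synthesizes feature-space
# samples, which are then used to fit a kernel-based classifier.
import numpy as np


class ClassConditionalGaussianMemory:
    """Toy generative memory: fits one Gaussian per class in feature space."""

    def __init__(self):
        self.stats = {}  # class label -> (mean, std)

    def update(self, feats, labels):
        for c in np.unique(labels):
            fc = feats[labels == c]
            self.stats[c] = (fc.mean(axis=0), fc.std(axis=0) + 1e-6)

    def sample(self, n_per_class):
        xs, ys = [], []
        for c, (mu, sd) in self.stats.items():
            xs.append(np.random.randn(n_per_class, mu.shape[0]) * sd + mu)
            ys.append(np.full(n_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)


def fit_linear_kernel_classifier(feats, labels, reg=1e-3):
    """Kernel ridge regression with a linear kernel and one-hot targets."""
    classes = np.unique(labels)
    onehot = (labels[:, None] == classes[None, :]).astype(float)
    gram = feats @ feats.T                              # linear kernel
    alpha = np.linalg.solve(gram + reg * np.eye(len(feats)), onehot)
    return lambda q: classes[np.argmax((q @ feats.T) @ alpha, axis=1)]


# Usage: two synthetic "tasks"; only generated samples are replayed,
# so no exemplars from past tasks are ever stored.
rng = np.random.default_rng(0)
memory = ClassConditionalGaussianMemory()
for task_classes in ([0, 1], [2, 3]):
    feats = np.concatenate(
        [rng.normal(c, 0.3, size=(50, 16)) for c in task_classes])
    labels = np.repeat(task_classes, 50)
    memory.update(feats, labels)                    # refresh the generator
    replay_feats, replay_labels = memory.sample(50) # covers all classes so far
    predict = fit_linear_kernel_classifier(replay_feats, replay_labels)
    print("accuracy on current task:", np.mean(predict(feats) == labels))
\end{verbatim}

The sketch only illustrates the memory-free replay mechanism; the supervised contrastive regularization and the learned generative model that shape the quality of the generated samples are left out for brevity.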