Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer to unseen tasks better than those produced by joint-training methods relying on task-specific supervision. In this paper, we find that a similar trend holds in the continual learning context: contrastively learned representations are more robust against catastrophic forgetting than jointly trained representations. Based on this novel observation, we propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations. More specifically, the proposed scheme (1) learns representations using a contrastive learning objective, and (2) preserves the learned representations using a self-supervised distillation step. We conduct extensive experimental validation on popular benchmark image classification datasets, where our method sets new state-of-the-art performance.
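To make the two components concrete, the sketch below shows one possible training step under stated assumptions: a SupCon-style supervised contrastive loss over current-task and rehearsal-buffer samples, plus a self-supervised distillation term that pulls the current encoder's instance-wise similarity distribution toward that of a frozen snapshot of the encoder from before the current task. All names (`contrastive_loss`, `distillation_loss`, `training_step`, `distill_weight`) are illustrative, not taken from the paper, and the exact loss formulations in the paper may differ.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(features, labels, temperature=0.1):
    """SupCon-style loss over L2-normalized features (N, D) with labels (N,).

    Positives for an anchor are the other samples sharing its label.
    """
    device = features.device
    n = features.size(0)
    sim = features @ features.T / temperature
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()      # numerical stability
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    # Exclude self-similarity from the softmax denominator.
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0                                       # anchors with >=1 positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return -mean_log_prob_pos.mean()


def _relation(features, temperature):
    """Per-sample similarity logits to all *other* samples, shape (N, N-1)."""
    n = features.size(0)
    sim = features @ features.T / temperature
    off_diag = ~torch.eye(n, dtype=torch.bool, device=features.device)
    return sim[off_diag].view(n, n - 1)


def distillation_loss(feat_new, feat_old, temperature=0.1):
    """Self-supervised distillation: match the current encoder's instance-wise
    similarity distribution to that of the frozen previous-task encoder."""
    p_old = F.softmax(_relation(feat_old, temperature), dim=1)
    log_p_new = F.log_softmax(_relation(feat_new, temperature), dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean")


def training_step(encoder, prev_encoder, batch, buffer_batch, distill_weight=1.0):
    """One rehearsal step: contrastive loss on current + buffer samples, plus
    distillation against the encoder snapshot taken before the current task."""
    x = torch.cat([batch["x"], buffer_batch["x"]])
    y = torch.cat([batch["y"], buffer_batch["y"]])
    z = F.normalize(encoder(x), dim=1)
    loss = contrastive_loss(z, y)
    if prev_encoder is not None:                                   # first task has no snapshot
        with torch.no_grad():
            z_old = F.normalize(prev_encoder(x), dim=1)
        loss = loss + distill_weight * distillation_loss(z, z_old)
    return loss
```

In this sketch, representation learning (step 1) is handled entirely by the contrastive term, while the distillation term (step 2) only constrains how relations between instances change across tasks, so the encoder can keep adapting without discarding previously learned structure; a classifier would be fit on top of the frozen representations at evaluation time.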