Continual learning (CL) aims to learn a sequence of tasks without forgetting previously acquired knowledge. However, recent advances in continual learning are restricted to supervised continual learning (SCL) scenarios and therefore do not scale to real-world applications, where data distributions are often biased and unannotated. In this work, we focus on unsupervised continual learning (UCL), in which feature representations are learned on an unlabelled sequence of tasks, and show that reliance on annotated data is not necessary for continual learning. We conduct a systematic study of the learned feature representations and show that unsupervised visual representations are surprisingly more robust to catastrophic forgetting, consistently achieve better performance, and generalize better to out-of-distribution tasks than their SCL counterparts. Furthermore, through qualitative analysis of the learned representations, we find that UCL achieves a smoother loss landscape and learns meaningful feature representations. Additionally, we propose Lifelong Unsupervised Mixup (LUMP), a simple yet effective technique that interpolates between instances of the current task and instances of previous tasks to alleviate catastrophic forgetting of unsupervised representations.
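To make the interpolation step concrete, below is a minimal PyTorch-style sketch of a LUMP-like update, assuming a self-supervised learner that takes two augmented views of each image (e.g., a SimSiam-style objective). The names `ReplayBuffer`, `lump_step`, and `unsup_loss`, and the Beta parameter `alpha`, are illustrative assumptions rather than the authors' released implementation.

```python
# Sketch of Lifelong Unsupervised Mixup (LUMP): mix current-task instances with
# instances replayed from earlier tasks before applying an unsupervised loss.
# Names and hyperparameters here are illustrative assumptions, not the paper's exact API.
import random
import torch


class ReplayBuffer:
    """Reservoir-style buffer holding raw, unlabeled instances from past tasks."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, batch: torch.Tensor) -> None:
        for x in batch:
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(x.detach().cpu())
            else:
                idx = random.randrange(self.seen)
                if idx < self.capacity:
                    self.data[idx] = x.detach().cpu()

    def sample(self, batch_size: int, device: torch.device) -> torch.Tensor:
        idxs = [random.randrange(len(self.data)) for _ in range(batch_size)]
        return torch.stack([self.data[i] for i in idxs]).to(device)


def lump_step(model, optimizer, unsup_loss, x1, x2, buffer, alpha: float = 0.4):
    """One update: interpolate two current-task views with buffered past instances.

    x1, x2 are two augmented views of the current-task batch; unsup_loss is any
    self-supervised objective over two view embeddings (assumed SimSiam-like here).
    """
    device = x1.device
    mixed1, mixed2 = x1, x2
    if len(buffer.data) >= x1.size(0):
        lam = float(torch.distributions.Beta(alpha, alpha).sample())
        past1 = buffer.sample(x1.size(0), device)
        past2 = buffer.sample(x2.size(0), device)
        # Pixel-level mixup between current and past instances, per view.
        mixed1 = lam * x1 + (1.0 - lam) * past1
        mixed2 = lam * x2 + (1.0 - lam) * past2

    loss = unsup_loss(model(mixed1), model(mixed2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    buffer.add(x1)  # keep current-task instances available for replay in later tasks
    return loss.item()
```

The sketch keeps the replay buffer rehearsal-free of labels: only raw instances are stored, and forgetting is mitigated purely by mixing them into the current task's views before the unsupervised objective is computed.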