Unsupervised lifelong learning refers to the ability to learn over time while memorizing previous patterns, all without supervision. Although great progress has been made in this direction, existing work often assumes strong prior knowledge about the incoming data (e.g., known class boundaries), which can be impossible to obtain in complex and unpredictable environments. In this paper, motivated by real-world scenarios and current studies, we propose a more practical problem setting: online self-supervised lifelong learning without prior knowledge. The proposed setting is challenging because the data are non-iid and single-pass, there is no external supervision, and no prior knowledge is available. We conduct preliminary analyses and show that existing approaches fail to learn useful information in this setup. To address these challenges, we propose Self-Supervised ContrAstive Lifelong LEarning without Prior Knowledge (SCALE), which extracts and memorizes representations on the fly, purely from the data continuum. SCALE is designed around three major components: a pseudo-supervised contrastive loss, a self-supervised forgetting loss, and an online memory update using uniform subset selection. All three components are designed to work collaboratively to maximize learning performance. We perform comprehensive experiments with SCALE under iid and four non-iid data streams. The results show that SCALE outperforms the best state-of-the-art algorithm in all settings, with improvements of up to 3.83%, 2.77%, and 5.86% in kNN accuracy on the CIFAR-10, CIFAR-100, and SubImageNet datasets, respectively.
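The abstract names SCALE's three components but not their exact formulations. As a rough illustration only, the sketch below shows one plausible way the first two components could be combined into a single training objective: a SupCon-style contrastive term driven by pseudo-labels, plus a forgetting term implemented as similarity distillation against a frozen copy of the model on replayed memory samples. All function names, loss forms, and hyperparameters (e.g., `tau`, the 0.5 weight) are assumptions for illustration, not the paper's actual method; the third component, the uniform-subset memory update, is omitted for brevity.

```python
# Hypothetical sketch of SCALE-like loss components (PyTorch).
# The paper's exact losses may differ; this only illustrates the idea.
import torch
import torch.nn.functional as F

def pseudo_supervised_contrastive_loss(z, pseudo_labels, tau=0.1):
    """SupCon-style contrastive loss where positives are samples sharing a
    pseudo-label (assumption: pseudo-labels come from clustering, since the
    setting provides no ground-truth labels)."""
    z = F.normalize(z, dim=1)
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    # Pairwise similarities; exclude self-pairs from the softmax denominator.
    sim = (z @ z.t() / tau).masked_fill(self_mask, float("-inf"))
    pos_mask = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0  # skip anchors with no positive pair
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(pos_log_prob[valid] / pos_counts[valid]).mean()

def forgetting_loss(z_new, z_old, tau=0.1):
    """Assumed self-supervised forgetting term: keep the current pairwise
    similarity distribution close (in KL) to the one produced by a frozen
    copy of the model, evaluated on replayed memory samples."""
    p_old = F.softmax(
        F.normalize(z_old, dim=1) @ F.normalize(z_old, dim=1).t() / tau, dim=1)
    log_p_new = F.log_softmax(
        F.normalize(z_new, dim=1) @ F.normalize(z_new, dim=1).t() / tau, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean")

# Toy usage: 8 embeddings in 3 pseudo-clusters, plus frozen-model features
# of memory samples for the forgetting term.
z = torch.randn(8, 32, requires_grad=True)
pseudo_labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
z_old = torch.randn(8, 32)
loss = pseudo_supervised_contrastive_loss(z, pseudo_labels) \
       + 0.5 * forgetting_loss(z, z_old)
loss.backward()
```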