Unsupervised lifelong learning refers to the ability to learn over time while memorizing previous patterns without supervision. Previous works assumed strong prior knowledge about the incoming data (e.g., knowing the class boundaries), which can be impossible to obtain in complex and unpredictable environments. In this paper, motivated by real-world scenarios, we formally define the online unsupervised lifelong learning problem with class-incremental streaming data, which is non-iid and single-pass. The problem is more challenging than existing lifelong learning problems due to the absence of labels and prior knowledge. To address these challenges, we propose Self-Supervised ContrAstive Lifelong LEarning (SCALE), which extracts and memorizes knowledge on-the-fly. SCALE is designed around three major components: a pseudo-supervised contrastive loss, a self-supervised forgetting loss, and an online memory update based on uniform subset selection. All three components work collaboratively to maximize learning performance. Our loss functions leverage pairwise similarity, thus removing the dependency on supervision or prior knowledge. We perform comprehensive experiments with SCALE under iid and four non-iid data streams. SCALE outperforms the best state-of-the-art algorithm in all settings, with improvements of up to 6.43%, 5.23%, and 5.86% kNN accuracy on the CIFAR-10, CIFAR-100, and SubImageNet datasets.
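To illustrate what a pairwise-similarity contrastive loss looks like in general, the sketch below implements a standard SimCLR-style NT-Xent objective in PyTorch. This is only a minimal, generic example for context; the function name, temperature value, and formulation are illustrative assumptions and do not reproduce SCALE's actual pseudo-supervised contrastive or self-supervised forgetting losses, which are defined in the paper.

```python
# Minimal sketch of a generic pairwise-similarity contrastive loss
# (SimCLR-style NT-Xent). This is NOT the SCALE loss; it is a standard
# baseline shown only to illustrate similarity-based contrastive learning.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                              temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) unit vectors
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    # Positive pairs: view i matches view i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In this generic formulation, the only learning signal comes from pairwise similarities between embeddings of augmented views, which is the same property the abstract highlights as the reason SCALE can operate without labels or prior knowledge.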