Unsupervised lifelong learning refers to the ability to learn over time while memorizing previous patterns without supervision. Previous works assumed strong prior knowledge about the incoming data (e.g., knowing the class boundaries), which can be impossible to obtain in complex and unpredictable environments. In this paper, motivated by real-world scenarios, we formally define the online unsupervised lifelong learning problem with class-incremental streaming data, which is non-iid and single-pass. The problem is more challenging than existing lifelong learning problems due to the absence of labels and prior knowledge. To address it, we propose Self-Supervised ContrAstive Lifelong LEarning (SCALE), which extracts and memorizes knowledge on-the-fly. SCALE is designed around three major components: a pseudo-supervised contrastive loss, a self-supervised forgetting loss, and an online memory update for uniform subset selection. All three components are designed to work collaboratively to maximize learning performance. Our loss functions leverage pairwise similarity, thus removing the dependency on supervision or prior knowledge. We perform comprehensive experiments with SCALE under an iid data stream and four non-iid data streams. SCALE outperforms the best state-of-the-art algorithm in all settings, with improvements of up to 3.83%, 2.77%, and 5.86% kNN accuracy on the CIFAR-10, CIFAR-100, and SubImageNet datasets, respectively.
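To illustrate the idea of replacing label supervision with pairwise similarity, the sketch below shows a minimal pseudo-supervised contrastive loss in PyTorch. It is not the exact formulation used in SCALE: the threshold `sim_threshold`, the temperature value, and the function name are illustrative assumptions; positive pairs are simply inferred from pairwise cosine similarity between embeddings rather than from class labels.

```python
import torch
import torch.nn.functional as F

def pseudo_supervised_contrastive_loss(features, temperature=0.1, sim_threshold=0.8):
    """Illustrative sketch: contrastive loss with pseudo-positives from pairwise similarity.

    features: (N, D) L2-normalized embeddings of one batch.
    Pairs whose cosine similarity exceeds `sim_threshold` (an assumed
    hyperparameter) are treated as positives, so no labels are needed.
    """
    n = features.size(0)
    sim = features @ features.t()          # pairwise cosine similarity
    logits = sim / temperature

    # Pseudo-positive mask from pairwise similarity, excluding self-pairs
    pos_mask = (sim > sim_threshold).float()
    pos_mask.fill_diagonal_(0)

    # Exclude self-similarity from the softmax denominator
    logit_mask = torch.ones_like(logits) - torch.eye(n, device=features.device)
    exp_logits = torch.exp(logits) * logit_mask
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-likelihood over each anchor's pseudo-positives
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()

# Usage: embeddings from any encoder, projected onto the unit sphere
z = F.normalize(torch.randn(32, 128), dim=1)
print(pseudo_supervised_contrastive_loss(z).item())
```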