Continual learning, a promising future learning strategy, aims to learn new tasks incrementally using fewer computation and memory resources, instead of retraining the model from scratch whenever a new task arrives. However, existing approaches are designed in a supervised fashion, assuming all data from new tasks have been annotated, which is not practical for many real-life applications. In this work, we introduce a new framework that makes continual learning feasible in an unsupervised mode by using pseudo labels, obtained from cluster assignments, to update the model. We focus on image classification under the class-incremental setting and assume no class labels are provided for training in each incremental learning step. For illustration purposes, we apply k-means clustering, a knowledge distillation loss, and an exemplar set as our baseline solution, which achieves competitive results even compared with supervised approaches on both the challenging CIFAR-100 and ImageNet (ILSVRC) datasets. We also demonstrate that the performance of our baseline solution can be further improved by incorporating recently developed supervised continual learning techniques, showing great potential for our framework to minimize the gap between supervised and unsupervised continual learning.
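To make the core idea concrete, the following is a minimal sketch, not the authors' exact implementation, of how pseudo labels from k-means cluster assignments can be combined with a knowledge distillation loss in an incremental step. The number of new classes, the availability of the previous model's logits, and the temperature and weighting hyperparameters are all assumptions for illustration; exemplar management is omitted.

```python
# Sketch: pseudo labels via k-means + distillation loss for one incremental step.
# All names (pseudo_labels_from_kmeans, incremental_loss, n_new_classes, etc.)
# are hypothetical placeholders, not identifiers from the paper.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def pseudo_labels_from_kmeans(features: torch.Tensor, n_new_classes: int) -> torch.Tensor:
    """Cluster unlabeled features; cluster indices serve as pseudo class labels."""
    kmeans = KMeans(n_clusters=n_new_classes, n_init=10)
    assignments = kmeans.fit_predict(features.detach().cpu().numpy())
    return torch.as_tensor(assignments, dtype=torch.long)


def incremental_loss(
    logits: torch.Tensor,          # current model's logits over old + new classes
    pseudo_targets: torch.Tensor,  # cluster indices in [0, n_new_classes)
    old_logits: torch.Tensor,      # previous model's logits over old classes
    n_old_classes: int,
    T: float = 2.0,                # assumed distillation temperature
    lam: float = 1.0,              # assumed loss-balancing weight
) -> torch.Tensor:
    """Cross-entropy on pseudo labels + distillation on old-class outputs."""
    # New classes are indexed after the old ones in the classifier head.
    cls_loss = F.cross_entropy(logits, pseudo_targets + n_old_classes)
    # Distill the previous model's predictions to preserve old-class knowledge.
    dist_loss = F.kl_div(
        F.log_softmax(logits[:, :n_old_classes] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return cls_loss + lam * dist_loss
```

In this sketch, the cluster index of each unlabeled sample stands in for its class label when updating the classifier, while the distillation term plays the same role it does in supervised class-incremental methods: anchoring predictions on previously learned classes.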