Targeting real-world scenarios, online continual learning aims to learn new tasks from sequentially arriving data under the constraint that each sample is observed only once by the learner. Although recent works have achieved remarkable results by storing part of the learned task data as exemplars for knowledge replay, their performance depends heavily on the number of stored exemplars, while storage consumption is a significant constraint in continual learning. In addition, storing exemplars may not be feasible for certain applications due to privacy concerns. In this work, we propose a novel exemplar-free method that leverages a nearest-class-mean (NCM) classifier, where each class mean is estimated during the training phase over all data seen so far via an online mean update criterion. We focus on the image classification task and conduct extensive experiments on benchmark datasets including CIFAR-100 and Food-1k. The results demonstrate that our method, without using any exemplars, outperforms state-of-the-art exemplar-based approaches by large margins under the standard protocol (20 exemplars per class) and achieves competitive performance even against a larger exemplar budget (100 exemplars per class).
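As a rough illustration of the online mean update idea behind an NCM classifier (a minimal sketch, not the authors' exact implementation), the snippet below maintains per-class running means over feature embeddings, so no exemplars need to be stored. The class name `OnlineNCM`, the feature dimension of 512, the use of a frozen feature extractor, and Euclidean distance for classification are all illustrative assumptions not taken from the paper.

```python
import numpy as np

class OnlineNCM:
    """Nearest-class-mean classifier with online (running) mean updates.

    Illustrative sketch: class means are updated incrementally from
    feature embeddings as data arrives, avoiding exemplar storage.
    """

    def __init__(self, feature_dim: int):
        self.feature_dim = feature_dim
        self.means = {}   # class label -> running mean embedding
        self.counts = {}  # class label -> number of samples seen so far

    def update(self, feature: np.ndarray, label: int) -> None:
        # Incremental mean: mu <- mu + (x - mu) / n, which equals the
        # batch mean over all samples of this class seen so far.
        if label not in self.means:
            self.means[label] = np.zeros(self.feature_dim)
            self.counts[label] = 0
        self.counts[label] += 1
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature: np.ndarray) -> int:
        # Assign the class whose mean embedding is nearest in Euclidean distance.
        return min(self.means, key=lambda c: np.linalg.norm(feature - self.means[c]))

# Example usage: a stream of (feature, label) pairs, e.g. embeddings
# produced by a feature extractor (here simulated with random vectors).
ncm = OnlineNCM(feature_dim=512)
rng = np.random.default_rng(0)
for label in (0, 1):
    for _ in range(10):
        ncm.update(rng.normal(loc=label, size=512), label)
print(ncm.predict(rng.normal(loc=1.0, size=512)))  # likely predicts 1
```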