Continual learning requires a model to retain previously learned knowledge while continually learning from a non-i.i.d. data stream. Because of its single-pass training setting, online continual learning is very challenging, yet it is closer to real-world scenarios in which quick adaptation to new data is desirable. In this paper, we focus on the online class-incremental learning setting, in which new classes emerge over time. Almost all existing methods are replay-based with a softmax classifier. However, the inherent logits bias problem in the softmax classifier is a major cause of catastrophic forgetting, and existing solutions are not applicable to online settings. To bypass this problem, we abandon the softmax classifier and propose a novel generative framework based on the feature space. In our framework, a generative classifier that utilizes the replay memory is used for inference, and the training objective is a pair-based metric learning loss that is theoretically proven to optimize the feature space in a generative way. To improve the ability to learn new data, we further propose a hybrid of generative and discriminative losses to train the model. Extensive experiments on several benchmarks, including newly introduced task-free datasets, show that our method outperforms a series of state-of-the-art replay-based methods with discriminative classifiers and consistently reduces catastrophic forgetting by a remarkable margin.
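To make the high-level idea concrete, the sketch below illustrates one plausible instantiation of the two ingredients named in the abstract: a pair-based metric learning loss on features and a generative-style classifier that performs inference from a replay memory (here, nearest class mean in feature space). This is a minimal illustration under our own assumptions, not the paper's exact loss, classifier, or backbone; all names (FeatureNet, pairwise_metric_loss, classify_by_prototype) are hypothetical.

```python
# Illustrative sketch only: replay-memory-based generative-style inference plus a
# simple pair-based metric loss. The actual formulation in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    """Toy feature extractor; a real system would use a deeper backbone."""
    def __init__(self, in_dim=784, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm features

def pairwise_metric_loss(feats, labels, margin=0.5):
    """Contrastive-style pair loss: pull same-class pairs together, push others apart."""
    dists = torch.cdist(feats, feats)                       # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = same * dists.pow(2)                                # attract positive pairs
    neg = (1 - same) * F.relu(margin - dists).pow(2)         # repel negatives within margin
    mask = 1 - torch.eye(len(labels), device=feats.device)   # ignore self-pairs
    return ((pos + neg) * mask).sum() / mask.sum()

@torch.no_grad()
def classify_by_prototype(model, x, memory_x, memory_y):
    """Generative-style inference: assign the nearest class mean computed from replay memory."""
    feats = model(x)
    mem_feats = model(memory_x)
    classes = memory_y.unique()
    protos = torch.stack([mem_feats[memory_y == c].mean(0) for c in classes])
    return classes[torch.cdist(feats, protos).argmin(1)]

# Tiny usage example on random data.
model = FeatureNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 4, (32,))
loss = pairwise_metric_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
mem_x, mem_y = torch.randn(40, 784), torch.randint(0, 4, (40,))
preds = classify_by_prototype(model, torch.randn(8, 784), mem_x, mem_y)
```

In such a setup there are no class-specific softmax logits, so the logits bias problem the abstract refers to does not arise; decisions depend only on how the metric loss shapes the feature space and on the replay memory used at inference time.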