Online continual learning is a challenging scenario in which a model must learn from a non-stationary stream of data where each sample is seen only once. The main challenge is to learn incrementally while avoiding catastrophic forgetting, namely the loss of previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse it over time. Unfortunately, due to the limited memory size, the quality of the memory deteriorates over time. In this paper we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and make better use of its limited size. The sample condensation step compresses old samples instead of removing them, as other replay strategies do. As a result, experiments show that, whenever the memory budget is limited compared to the complexity of the data, OLCGM improves final accuracy over state-of-the-art replay strategies.
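The abstract describes condensation only at a high level. Purely for illustration, below is a minimal PyTorch sketch of one plausible instantiation of the sample condensation step: compressing a group of buffered same-class samples into fewer synthetic samples by gradient matching, in the spirit of dataset condensation. It is not the authors' implementation; the function name `condense`, its hyperparameters, and the initialization choice are assumptions made for this example.

```python
# Hypothetical sketch of a gradient-matching condensation step for a replay
# buffer (NOT the OLCGM reference implementation; names and hyperparameters
# are illustrative assumptions).
import torch
import torch.nn.functional as F

def condense(model, real_x, real_y, n_synthetic=1, steps=100, lr=0.1):
    """Distill same-class buffered samples (real_x, real_y) into n_synthetic
    synthetic samples whose loss gradient matches that of the real ones."""
    params = [p for p in model.parameters() if p.requires_grad]
    # Initialize the synthetic samples from a subset of the real ones.
    syn_x = real_x[:n_synthetic].clone().requires_grad_(True)
    syn_y = real_y[:n_synthetic].clone()
    opt = torch.optim.SGD([syn_x], lr=lr)
    # Target gradient computed once from the real samples (model is frozen).
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), params)
    for _ in range(steps):
        # Differentiable gradient of the loss on the synthetic samples.
        g_syn = torch.autograd.grad(
            F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)
        # Per-parameter cosine distance between synthetic and real gradients.
        loss = sum(1 - F.cosine_similarity(s.flatten(), r.flatten(), dim=0)
                   for s, r in zip(g_syn, g_real))
        opt.zero_grad()
        loss.backward()   # updates only syn_x; the model stays fixed
        opt.step()
        model.zero_grad() # discard gradients accumulated on the frozen model
    return syn_x.detach(), syn_y
```

In a full replay strategy, a routine like this would be invoked when the buffer is full: instead of evicting an old sample to make room, a group of same-class samples is condensed into fewer slots, freeing space for incoming data while preserving part of their information.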