We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting in the absence of explicit task boundaries or identities. Among the many efforts on task-free CL, a notable family of approaches is memory-based, storing and replaying a subset of training examples. However, the utility of stored examples may diminish over time because the CL model is continually updated. Here, we propose Gradient based Memory EDiting (GMED), a framework for editing stored examples in continuous input space via gradient updates, in order to create more "challenging" examples for replay. GMED-edited examples remain similar to their unedited forms, but yield increased loss in the upcoming model updates, thereby making future replays more effective at overcoming catastrophic forgetting. By construction, GMED can be seamlessly applied in conjunction with other memory-based CL algorithms to bring further improvement. Experiments validate the effectiveness of GMED, and our best method significantly outperforms baselines and the previous state of the art on five out of six datasets. Code can be found at https://github.com/INK-USC/GMED.
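To make the editing step concrete, the following is a minimal PyTorch-style sketch of the idea described above: a replayed memory example is nudged by gradient ascent on its input so that it produces a higher loss after the model's upcoming update on the streaming batch. This is an illustrative sketch under simplifying assumptions, not the authors' exact implementation (see the linked repository); the helper name `gmed_edit`, the `edit_lr` step size, and the single look-ahead SGD step are assumptions made for the example.

```python
import copy
import torch
import torch.nn.functional as F

def gmed_edit(model, optimizer, stream_x, stream_y, mem_x, mem_y, edit_lr=0.1):
    """Illustrative sketch of gradient-based memory editing.

    Edits replayed memory examples so they become harder for the model
    *after* its upcoming update on the streaming batch, then replays them.
    `edit_lr` is a hypothetical editing step size, not a value from the paper.
    """
    # 1. Look-ahead: virtually update a copy of the model on the stream batch.
    lookahead = copy.deepcopy(model)
    la_opt = torch.optim.SGD(lookahead.parameters(),
                             lr=optimizer.param_groups[0]["lr"])
    la_opt.zero_grad()
    F.cross_entropy(lookahead(stream_x), stream_y).backward()
    la_opt.step()

    # 2. Edit memory examples by gradient ascent on the look-ahead loss,
    #    so the edited examples yield higher loss after the model update
    #    while staying close to their unedited forms (small step size).
    mem_x = mem_x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(lookahead(mem_x), mem_y)
    grad, = torch.autograd.grad(loss, mem_x)
    edited_x = (mem_x + edit_lr * grad).detach()

    # 3. Replay: update the real model on the stream batch plus the
    #    edited memory examples.
    optimizer.zero_grad()
    x = torch.cat([stream_x, edited_x])
    y = torch.cat([stream_y, mem_y])
    F.cross_entropy(model(x), y).backward()
    optimizer.step()
    return edited_x
```

Because the edit only perturbs inputs drawn from the replay memory, a routine like this can be layered on top of most memory-based CL methods by replacing their raw replayed examples with the edited ones.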