Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and to use this graph not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
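To make the idea concrete, the following is a minimal sketch of an episodic memory whose slot array is augmented with a learnable pairwise-similarity graph. All class and method names here are assumptions for illustration, not the paper's exact formulation; the replay loss shown is one plausible way to use the graph to preserve relational structure among stored samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEpisodicMemory(nn.Module):
    """Hypothetical sketch: an array of independent memory slots
    augmented with a learnable random graph over pairwise
    similarities. Not the authors' exact method."""

    def __init__(self, num_slots: int, feature_dim: int):
        super().__init__()
        # Fixed-size array of memory slots holding stored samples.
        self.register_buffer("slots", torch.zeros(num_slots, feature_dim))
        # Learnable logits for edge weights of a graph over the slots;
        # a sigmoid maps each logit to a similarity in (0, 1).
        self.edge_logits = nn.Parameter(torch.zeros(num_slots, num_slots))

    def similarity_graph(self) -> torch.Tensor:
        # Symmetrize so that similarity(i, j) == similarity(j, i).
        logits = 0.5 * (self.edge_logits + self.edge_logits.t())
        return torch.sigmoid(logits)

    def replay_loss(self, encoder: nn.Module) -> torch.Tensor:
        # Encourage the encoder's current embeddings of the stored
        # samples to respect the learned pairwise similarities,
        # guarding the relational structure of old data against
        # forgetting while new tasks are learned.
        z = F.normalize(encoder(self.slots), dim=1)
        current = z @ z.t()                # current cosine similarities
        target = self.similarity_graph()   # learned graph similarities
        return F.mse_loss(current, target)
```

In a training loop, one would presumably add `replay_loss` to the loss on incoming data, updating the encoder and the edge logits jointly so the graph both adapts to new tasks and anchors what was already learned.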