Continual learning is considered a key step toward next-generation artificial intelligence. Among the various methods, replay-based approaches, which maintain and replay a small episodic memory of previous samples, are among the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to forget reasonably while retaining the merits of human memory, including (1) storage efficiency, (2) generalizability, and (3) a degree of interpretability. To achieve all of these simultaneously, this paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries on memory completion and separation in cognitive neuroscience. Specifically, we propose to store in episodic memory only the parts of an image most important to the task, obtained via saliency-map extraction and memory encoding. When learning new tasks, the stored partial samples are completed (inpainted) by an adaptive data-generation module, mirroring how humans complete episodic memories. The module's parameters are shared across all tasks, and it can be trained jointly with the continual-learning classifier via bilevel optimization. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
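To make the storage scheme concrete, below is a minimal PyTorch sketch of the idea, not the paper's actual implementation: a gradient-based saliency map selects the task-relevant pixels, and the episodic memory stores only those pixels plus the binary mask; at replay time a shared generator completes the discarded background. The helper names (`saliency_mask`, `EpisodicMemory`, the `inpainter` callable) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def saliency_mask(model, x, y, keep_ratio=0.25):
    """Gradient-based saliency (a stand-in for the paper's saliency-map
    extraction): keep the top-`keep_ratio` most task-relevant pixels."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    sal = x.grad.abs().amax(dim=1)                 # (B, H, W) per-pixel saliency
    k = max(1, int(keep_ratio * sal[0].numel()))
    thresh = sal.flatten(1).topk(k, dim=1).values[:, -1]
    return (sal >= thresh.view(-1, 1, 1)).float()  # binary mask, (B, H, W)

class EpisodicMemory:
    """Stores only the salient pixels plus the binary mask, so each slot
    is far cheaper than a full image; replay completes the rest."""
    def __init__(self):
        self.slots = []

    def write(self, x, mask, y):
        self.slots.append((x * mask.unsqueeze(1), mask, y))

    def replay(self, inpainter):
        # Memory completion: a shared generator inpaints the discarded
        # (non-salient) background before the sample is replayed.
        xs, ys = zip(*[(inpainter(xm, m), y) for xm, m, y in self.slots])
        return torch.cat(xs), torch.cat(ys)
```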
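Continuing the sketch above, the joint training could look roughly like the following bilevel loop; `train_task` and its hyperparameters are again hypothetical names, and this is only one plausible reading of the abstract's bilevel formulation. The inner loop fits the classifier on current-task data plus completed memories, while the outer step updates the shared inpainter through the classifier's replay loss.

```python
def train_task(clf, inpainter, memory, loader, inner_steps=1, lr=1e-2):
    """One task under the bilevel sketch: the inner loop trains the
    classifier on new data plus completed memories; the outer step
    updates the shared inpainter via the classifier's replay loss."""
    clf_opt = torch.optim.SGD(clf.parameters(), lr=lr)
    gen_opt = torch.optim.SGD(inpainter.parameters(), lr=lr)
    for x, y in loader:
        for _ in range(inner_steps):               # inner problem: classifier
            loss = F.cross_entropy(clf(x), y)
            if memory.slots:
                xr, yr = memory.replay(inpainter)
                loss = loss + F.cross_entropy(clf(xr.detach()), yr)
            clf_opt.zero_grad(); loss.backward(); clf_opt.step()
        if memory.slots:                           # outer problem: inpainter
            xr, yr = memory.replay(inpainter)
            gen_loss = F.cross_entropy(clf(xr), yr)
            gen_opt.zero_grad(); gen_loss.backward(); gen_opt.step()
```

Because the inpainter's parameters are shared across all tasks, its memory cost does not grow with the number of tasks, matching the storage-efficiency goal stated above.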