Continual learning aims to learn a sequence of tasks in an online manner by leveraging knowledge acquired in the past, while still performing well on all previous tasks. This ability is crucial for artificial intelligence (AI) systems, which makes continual learning better suited to most real-world, complex application scenarios than the traditional learning paradigm. However, current models usually learn a generic representation based on the class labels of each task and adopt an effective strategy to avoid catastrophic forgetting. We postulate that selecting only the related and useful parts of the acquired knowledge to perform each task is more effective than utilizing the knowledge as a whole. Based on this observation, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to pick out the knowledge related to each particular task. Our model also combines experience replay and knowledge distillation to circumvent catastrophic forgetting. Finally, extensive experiments are conducted on different benchmarks, and the competitive results demonstrate that our proposed SRKOCL is a promising approach compared with the state of the art.
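The abstract does not spell out the attention module itself; the snippet below is a minimal PyTorch sketch of the kind of efficient channel attention (ECA-style) gating referred to, in which a global channel descriptor is passed through a lightweight 1D convolution and a sigmoid to reweight feature channels. The class name, kernel size, and usage are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class EfficientChannelAttention(nn.Module):
    """Illustrative ECA-style module: gates feature channels with weights
    produced by a lightweight 1D convolution over the channel descriptor."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # per-channel global context
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        y = self.avg_pool(x)                                 # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))       # 1D conv across channels
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))  # (B, C, 1, 1) gates
        return x * y                                         # reweight channels


if __name__ == "__main__":
    feats = torch.randn(4, 64, 8, 8)                # dummy feature map
    attended = EfficientChannelAttention()(feats)
    print(attended.shape)                           # torch.Size([4, 64, 8, 8])
```

Under this sketch, the attention gates act as a soft selector over channels of the shared representation, which is one plausible way to "pick out related knowledge" per task as described above.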