In this paper, we propose a continual learning (CL) technique that benefits sequential task learners by improving their retained accuracy and reducing catastrophic forgetting. The principal goal of our approach is to automatically extract modular parts of the neural network and then estimate the relatedness between tasks given these modular components. This technique is applicable to different families of CL methods, such as regularization-based approaches (e.g., Elastic Weight Consolidation, EWC) or rehearsal-based approaches (e.g., Gradient Episodic Memory, GEM) where episodic memory is needed. Empirical results demonstrate remarkable performance gains (in terms of robustness to forgetting) for methods such as EWC and GEM when combined with our technique, especially when the memory budget is very limited.