Continual learning is a paradigm in which tasks are learned sequentially under resource constraints. Its key challenge is the stability-plasticity dilemma: it is difficult to simultaneously maintain the stability needed to prevent catastrophic forgetting of old tasks and the plasticity needed to learn new tasks well. In this paper, we propose a new continual learning approach, Advanced Null Space (AdNS), which balances stability and plasticity without storing any data from previous tasks. Specifically, to improve stability, AdNS uses a low-rank approximation to construct a novel null space and projects the gradient onto it, preventing interference with past tasks. To control how the null space is generated, we introduce a non-uniform constraint strength that further reduces forgetting. Furthermore, we present a simple but effective method, intra-task distillation, to improve performance on the current task. Finally, we show theoretically that the null space plays a key role in both plasticity and stability. Experimental results show that the proposed method outperforms state-of-the-art continual learning approaches.
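The core mechanism described above, projecting the new-task gradient onto an approximate null space of past-task features obtained via a low-rank (truncated SVD) approximation, can be sketched as follows. This is a minimal illustration in NumPy, not the AdNS implementation; the function names, the energy-based rank threshold, and the use of an uncentered feature covariance are our assumptions.

```python
import numpy as np

def null_space_projector(features, energy_threshold=0.99):
    """Build a projector onto the approximate null space of past-task features.

    features: (n_samples, d) array of activations collected on past tasks.
    energy_threshold: fraction of spectral energy kept in the principal
        subspace (a hypothetical knob, not the paper's exact criterion).
    """
    cov = features.T @ features              # uncentered covariance, (d, d)
    U, S, _ = np.linalg.svd(cov)
    # Low-rank approximation: keep the smallest k capturing the threshold.
    energy = np.cumsum(S) / np.sum(S)
    k = int(np.searchsorted(energy, energy_threshold)) + 1
    U_k = U[:, :k]                           # principal (row-space) basis
    # Projector onto the orthogonal complement, i.e. the approximate null space.
    return np.eye(features.shape[1]) - U_k @ U_k.T

def project_gradient(grad, projector):
    """Project a new-task gradient so the update lies in the null space."""
    return projector @ grad
```

With an exact low-rank feature matrix, updates projected this way leave past-task outputs (to first order) unchanged, since the projected gradient is orthogonal to every stored feature vector.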