Continual learning aims to provide intelligent agents that are capable of continually learning a sequence of tasks, building on previously acquired knowledge. A key challenge in this learning paradigm is catastrophic forgetting of previously learned tasks when the agent faces a new one. Current rehearsal-based methods have shown success in mitigating the catastrophic forgetting problem by replaying samples from previous tasks while learning a new one. However, these methods are infeasible when the data of previous tasks is not accessible. In this work, we propose a new pseudo-rehearsal-based method, named learning Invariant Representation for Continual Learning (IRCL), in which a class-invariant representation is disentangled from a conditional generative model and jointly used with a class-specific representation to learn the sequence of tasks. Disentangling the shared invariant representation helps the agent learn the tasks continually while being more robust to forgetting and transferring knowledge better. We focus on class-incremental learning, where no knowledge about task identity is available during inference. We empirically evaluate our proposed method on two well-known benchmarks for continual learning: split MNIST and split Fashion MNIST. The experimental results show that our method outperforms regularization-based methods by a large margin and surpasses the state-of-the-art pseudo-rehearsal-based method. Finally, we analyze the role of the shared invariant representation in mitigating forgetting, especially when the number of replayed samples for each previous task is small.
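The pseudo-rehearsal scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the random linear decoder stands in for IRCL's trained conditional generative model, and all names, dimensions, and class splits are hypothetical. It shows only the core idea of replaying generated samples of past classes alongside real data from the current task, so no stored data from previous tasks is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, INPUT_DIM, NUM_CLASSES = 8, 784, 10

# Hypothetical stand-in for a trained conditional generator:
# a fixed random linear decoder from [z ; one_hot(y)] to input space.
W = rng.normal(size=(LATENT_DIM + NUM_CLASSES, INPUT_DIM))

def generate_pseudo_samples(classes, n_per_class):
    """Generate pseudo-rehearsal data for previously seen classes."""
    xs, ys = [], []
    for c in classes:
        z = rng.normal(size=(n_per_class, LATENT_DIM))      # class-invariant latent
        onehot = np.zeros((n_per_class, NUM_CLASSES))       # class-specific condition
        onehot[:, c] = 1.0
        xs.append(np.concatenate([z, onehot], axis=1) @ W)  # decode to input space
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

# The new task provides real data for classes {4, 5};
# classes {0, 1, 2, 3} from earlier tasks are replayed from the generator.
x_new = rng.normal(size=(64, INPUT_DIM))
y_new = rng.integers(4, 6, size=64)
x_old, y_old = generate_pseudo_samples(classes=[0, 1, 2, 3], n_per_class=16)

# Joint batch used to update the classifier without storing past data.
x_batch = np.concatenate([x_new, x_old])
y_batch = np.concatenate([y_new, y_old])
```

Training the classifier on `x_batch`/`y_batch` at each step is what mitigates forgetting: the generated samples keep the decision boundaries of earlier classes alive while the new classes are learned.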