Introducing a time dependency on the data-generating distribution has proven difficult for gradient-based training of neural networks, as the greedy updates result in catastrophic forgetting of previous timesteps. Continual learning aims to overcome this greedy optimization to enable continuous accumulation of knowledge over time. The data stream is typically divided into locally stationary distributions, called tasks, allowing task-based evaluation on held-out data from the training tasks. Contemporary evaluation protocols and metrics in continual learning are task-based and quantify the trade-off between stability and plasticity only at task transitions. However, our empirical evidence suggests that significant, temporary forgetting can occur between task transitions, remaining unidentified in task-based evaluation. Therefore, we propose a framework for continual evaluation that establishes per-iteration evaluation, and we define a new set of metrics that enables identifying the worst-case performance of the learner over its lifetime. Performing continual evaluation, we empirically identify that replay suffers from a stability gap: upon learning a new task, there is a substantial but transient decrease in performance on past tasks. Further conceptual and empirical analysis suggests that not only replay-based but also regularization-based continual learning methods are prone to the stability gap.
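To make the per-iteration evaluation protocol concrete, the sketch below shows one possible PyTorch-style implementation: after every training iteration on the current task, the model is evaluated on held-out data from all previously seen tasks, and the minimum accuracy reached on each past task is tracked as a worst-case measure. This is a minimal illustrative sketch, not the paper's exact framework or metric definitions; the names (`continual_evaluation`, `train_tasks`, `eval_loaders`) and the bookkeeping details are assumptions.

```python
import torch


def continual_evaluation(model, train_tasks, eval_loaders, optimizer, loss_fn, device="cpu"):
    """Train sequentially over tasks, evaluating held-out data of all seen tasks
    after every training iteration (rather than only at task transitions)."""
    num_tasks = len(eval_loaders)
    per_task_acc = {t: [] for t in range(num_tasks)}          # per-iteration accuracy curves
    min_acc = {t: float("inf") for t in range(num_tasks)}     # worst-case accuracy on past tasks

    for task_id, train_loader in enumerate(train_tasks):
        for x, y in train_loader:
            # One greedy update on the current task.
            model.train()
            optimizer.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()

            # Per-iteration evaluation on all tasks seen so far.
            model.eval()
            with torch.no_grad():
                for t in range(task_id + 1):
                    correct, total = 0, 0
                    for xe, ye in eval_loaders[t]:
                        pred = model(xe.to(device)).argmax(dim=1)
                        correct += (pred == ye.to(device)).sum().item()
                        total += ye.size(0)
                    acc = correct / total
                    per_task_acc[t].append(acc)
                    # Track the worst-case (minimum) accuracy on tasks learned earlier;
                    # a transient drop here is what the abstract calls the stability gap.
                    if t < task_id:
                        min_acc[t] = min(min_acc[t], acc)

    return per_task_acc, min_acc
```

In this sketch, a task-based protocol would only record the accuracies at the end of each task's inner loop; evaluating inside the loop is what exposes transient drops on past tasks between task transitions.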