Continual learning is known to suffer from catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten as more recent samples are learned. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, do not suffer from catastrophic forgetting when learned continually. We attempt to provide insight into the properties of these tasks that make them robust to catastrophic forgetting, and into the potential of using a proxy representation learning task for continual classification. We further introduce a novel yet simple algorithm, YASS, that outperforms state-of-the-art methods on the class-incremental categorization learning task. Finally, we present DyRT, a novel tool for tracking the dynamics of representation learning in continual models. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/ngailapdi/CLRec.