Continual learning is known to suffer from catastrophic forgetting, a phenomenon in which concepts learned earlier are forgotten in favor of more recently observed samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, do not suffer from catastrophic forgetting when learned continually. The robustness of these tasks suggests the potential for a proxy representation learning task for continual classification. We further introduce a novel yet simple algorithm, YASS, that achieves state-of-the-art performance on the class-incremental categorization task, and we provide insight into the benefit of learning the representation continually. Finally, we present converging evidence on the forgetting dynamics of representation learning in continual models. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec.