Continual Learning (CL, sometimes also termed incremental learning) is a flavor of machine learning where the usual assumption of a stationary data distribution is relaxed or omitted. When, e.g., DNNs are naively applied to CL problems, changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge. Although many significant contributions to enabling CL have been made in recent years, most works address supervised (classification) problems. This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning. Besides proposing a simple schema for classifying CL approaches w.r.t. their level of autonomy and supervision, we discuss the specific challenges associated with each setting and the potential contributions to the field of CL in general.