When a computational system continually learns from an ever-changing environment, it rapidly forgets its past experiences. This phenomenon is called catastrophic forgetting. Although a line of studies has been proposed to avoid catastrophic forgetting, most of the methods are based on intuitive insights into the phenomenon, and their performances have been evaluated only by numerical experiments on benchmark datasets. Therefore, in this study, we provide a theoretical framework for analyzing catastrophic forgetting using teacher-student learning. Teacher-student learning is a framework that introduces two neural networks: one network serves as the target function of supervised learning (the teacher), and the other is the network that learns it (the student). To analyze continual learning in the teacher-student framework, we characterize the similarity of tasks by the similarity of the input distributions and that of the input-output relationships of the target functions. Within this theoretical framework, we also provide a qualitative understanding of how a single-layer linear student network forgets tasks. Based on the analysis, we find that the student network can avoid catastrophic forgetting when the similarity among the input distributions is small and the similarity among the input-output relationships of the target functions is large. The analysis also suggests that the learning process sometimes exhibits a characteristic phenomenon called overshoot: even if the student network has once undergone catastrophic forgetting, it may perform reasonably well on a previous task after further learning of the current task.
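Since the abstract gives no equations, the following is a minimal numerical sketch of the setup it describes: a single-layer linear student trained by plain SGD on two teacher-defined tasks in sequence, with forgetting measured as the generalization error on the first task. The parameterization is illustrative and not the paper's: `rho` stands in for the input-output similarity of the two teachers, and `shared` models input-distribution similarity as the number of input coordinates active in both tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400            # weight dimension
M = 200            # input coordinates active per task
eta = 0.05 / M     # SGD learning rate
steps = 30000      # SGD steps per task

def make_teachers(rho):
    """Two unit-norm linear teachers whose weight vectors have inner product rho."""
    b1 = rng.standard_normal(N)
    b1 /= np.linalg.norm(b1)
    r = rng.standard_normal(N)
    r -= (r @ b1) * b1
    r /= np.linalg.norm(r)
    return b1, rho * b1 + np.sqrt(1.0 - rho**2) * r

def make_task(active, teacher):
    """Inputs are i.i.d. standard normal on `active` coordinates and zero elsewhere,
    so the generalization error E[(w.x - teacher.x)^2] has the closed form
    sum over active coordinates of (w_i - teacher_i)^2."""
    def sample():
        x = np.zeros(N)
        x[active] = rng.standard_normal(active.size)
        return x, float(teacher @ x)
    def gen_error(w):
        return float(np.sum((w[active] - teacher[active]) ** 2))
    return sample, gen_error

def sgd(w, sample, n):
    for _ in range(n):
        x, y = sample()
        w -= eta * (w @ x - y) * x   # gradient of 0.5 * (w.x - y)^2
    return w

# Hypothetical similarity knobs, not the paper's parameterization:
rho = 0.9        # similarity of the teachers' input-output relationships
shared = 50      # overlap between the two tasks' input distributions
active1 = np.arange(M)
active2 = np.arange(M - shared, 2 * M - shared)

b1, b2 = make_teachers(rho)
sample1, err1 = make_task(active1, b1)
sample2, err2 = make_task(active2, b2)

w = np.zeros(N)
w = sgd(w, sample1, steps)   # learn task 1
print(f"after task 1: err1 = {err1(w):.4f}")
w = sgd(w, sample2, steps)   # then learn task 2
print(f"after task 2: err1 = {err1(w):.4f}, err2 = {err2(w):.4f}")
```

In this toy, shrinking `shared` or raising `rho` reduces the reported forgetting of task 1, in line with the stated finding; because the input coordinates here are independent, this simplified sketch is not expected to reproduce the overshoot phenomenon described in the analysis.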