The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge and skills throughout their lifespan is a hallmark of natural intelligence, with obvious evolutionary motivations. In parallel, the ability of artificial neural networks (ANNs) to learn across a range of tasks and domains, combining and re-using learned representations where required, is a clear goal of artificial intelligence. The pursuit of this capacity, widely described as continual learning, has become a prolific subfield of machine learning research. Despite the numerous successes of deep learning in recent years, across domains ranging from image recognition to machine translation, such continual task learning has proved challenging. Neural networks trained on multiple tasks in sequence with stochastic gradient descent often suffer from representational interference, whereby the learned weights for a given task effectively overwrite those of previous tasks in a process termed catastrophic forgetting. This represents a major impediment to the development of more generalised artificial learning systems, capable of accumulating knowledge over time and task space, in a manner analogous to humans. A repository of selected papers and implementations accompanying this review can be found at https://github.com/mccaffary/continual-learning.
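To make this failure mode concrete, the following is a minimal, self-contained PyTorch sketch of sequential training on two synthetic binary classification tasks whose discriminative input directions are constructed to be orthogonal. The data generator, architecture, and hyperparameters are illustrative assumptions and are not taken from the accompanying repository; with plain stochastic gradient descent and no access to the first task's data, accuracy on the first task typically falls sharply after training on the second.

```python
# Toy illustration of catastrophic forgetting under sequential SGD training.
# All settings (data generator, network size, learning rate) are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 20

def make_task(direction, n=1000):
    """Binary task: inputs are +direction or -direction plus unit Gaussian noise."""
    y = torch.randint(0, 2, (n,))
    signs = y.float() * 2 - 1  # map labels {0, 1} -> signs {-1, +1}
    x = signs.unsqueeze(1) * direction + torch.randn(n, DIM)
    return x, y

def train(model, x, y, epochs=300, lr=0.1):
    """Full-batch SGD on a single task: no rehearsal, no regularisation."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Task A and task B rely on orthogonal input directions, so learning B
# gives SGD no incentive to preserve the weights that solved A.
direction_a = torch.randn(DIM)
direction_b = torch.randn(DIM)
direction_b -= (direction_b @ direction_a) / (direction_a @ direction_a) * direction_a
xa, ya = make_task(direction_a)
xb, yb = make_task(direction_b)

model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 2))

train(model, xa, ya)
print(f"Task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)  # sequential training: task A data is no longer available
print(f"Task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")  # typically drops toward chance
print(f"Task B accuracy after training on B: {accuracy(model, xb, yb):.2f}")
```

Mitigating exactly this kind of accuracy drop on earlier tasks is the goal of the continual learning methods surveyed in this review.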