Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.