Humans can learn a variety of concepts and skills incrementally over the course of their lives while exhibiting many desirable properties, such as continual learning without forgetting, forward and backward transfer of knowledge, and learning a new concept or task from only a few examples. Several lines of machine learning research, such as lifelong machine learning, few-shot learning, and transfer learning, attempt to capture these properties. However, most previous approaches can demonstrate only subsets of these properties, often through different, complex mechanisms. In this work, we propose a simple yet powerful unified deep learning framework that supports almost all of these properties and approaches through one central mechanism. Experiments on toy examples support our claims. We also draw connections between many peculiarities of human learning (such as memory loss and "rain man") and our framework. As academics, we often lack the resources required to build and train deep neural networks with billions of parameters on hundreds of TPUs. Thus, while our framework is still conceptual and our experimental results are surely not SOTA, we hope that this unified lifelong learning framework inspires new work towards large-scale experiments and a better understanding of human learning in general. This paper is summarized in two short YouTube videos: https://youtu.be/gCuUyGETbTU (part 1) and https://youtu.be/XsaGI01b-1o (part 2).