This work proposes a new method for sequentially training a deep neural network on multiple tasks without suffering catastrophic forgetting, while endowing it with the capability to quickly adapt to unseen tasks. Building on existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific masks (impressions) on a randomly initialized backbone network is sufficient both to retain accuracy on previously learned tasks and to achieve high accuracy on new tasks. In contrast to previous methods, we do not need to generate a dedicated mask or context for each new task, instead leveraging transfer learning to keep the per-task parameter overhead small. Our work illustrates the power of linearly combining individual impressions, each of which fares poorly in isolation, to achieve performance comparable to a dedicated mask. Moreover, even repeated impressions from the same task (homogeneous masks), when combined, can approach the performance of heterogeneous combinations if sufficiently many impressions are used. Our approach scales more efficiently than existing methods, often requiring orders of magnitude fewer parameters, and can function without modification even when task identity is missing. In addition, when task labels are not given at inference, our algorithm provides an often favorable alternative to the entropy-based task-inference method proposed by Wortsman et al. (2020). We evaluate our method on a number of well-known image classification datasets and architectures.
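To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a single layer whose frozen, randomly initialized weights are gated by a learned linear combination of fixed binary masks. All names here (`MaskedLinear`, `num_impressions`, `alpha`) are hypothetical, and the mask initialization is a placeholder rather than the procedure used in the paper.

```python
# Illustrative sketch only: frozen random backbone weights modulated by a
# learned linear combination of fixed binary masks ("impressions").
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_impressions):
        super().__init__()
        # Randomly initialized backbone weights, kept frozen.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02, requires_grad=False)
        # A small bank of fixed binary masks (impressions); here they are
        # random placeholders standing in for previously learned task masks.
        self.masks = nn.Parameter(
            (torch.rand(num_impressions, out_features, in_features) > 0.5).float(),
            requires_grad=False)
        # The combination coefficients are the only trainable parameters.
        self.alpha = nn.Parameter(torch.zeros(num_impressions))

    def forward(self, x):
        # Linearly combine the impressions, then gate the frozen weights.
        combined_mask = torch.einsum("k,koi->oi", self.alpha, self.masks)
        return nn.functional.linear(x, self.weight * combined_mask)


# Adapting to a new task only requires learning `alpha` (a handful of scalars
# per layer), which is why the per-task parameter overhead stays small.
layer = MaskedLinear(128, 64, num_impressions=8)
out = layer(torch.randn(4, 128))
```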