This work proposes a new method to sequentially train deep neural networks on multiple tasks without suffering catastrophic forgetting, while endowing them with the capability to quickly adapt to unseen tasks. Starting from existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific supermasks (impressions) on a randomly initialized backbone network is sufficient both to retain accuracy on previously learned tasks and to achieve high accuracy on unseen tasks. In contrast to previous methods, we do not need to generate dedicated masks or contexts for each new task, instead leveraging transfer learning to keep the per-task parameter overhead small. Our work illustrates the power of linearly combining individual impressions, each of which fares poorly in isolation, to achieve performance comparable to that of a dedicated mask. Moreover, even repeated impressions from the same task (homogeneous masks), when combined, can approach the performance of heterogeneous combinations if sufficiently many impressions are used. Our approach scales more efficiently than existing methods, often requiring orders of magnitude fewer parameters, and can function without modification even when task identity is missing. In addition, in the setting where task labels are not given at inference, our algorithm provides an often-favorable alternative to the one-shot procedure used by Wortsman et al. (2020). We evaluate our method on a number of well-known image classification datasets and network architectures.
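The core idea above — gating a fixed, randomly initialized backbone with a learned linear combination of task-specific binary supermasks — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the masks and mixing coefficients `alpha` are random placeholders here, whereas in the method they would be learned per task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly initialized backbone weights (never trained).
W = rng.standard_normal((4, 4))

# A few task-specific binary supermasks ("impressions").
# In the method these are learned; here they are random stand-ins.
masks = [rng.integers(0, 2, size=W.shape) for _ in range(3)]

# Mixing coefficients (illustrative values; learned in practice).
alpha = np.array([0.5, 0.3, 0.2])

# Effective weights: backbone gated by the linear combination of impressions.
combined_mask = sum(a * m for a, m in zip(alpha, masks))
W_eff = W * combined_mask
```

Because only the masks and the low-dimensional `alpha` vary per task while `W` stays frozen, the per-task parameter overhead remains small.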