We present a continual learning approach for generative adversarial networks (GANs), based on designing and leveraging parameter-efficient feature-map transformations. Our approach learns a set of global and task-specific parameters. The global parameters are fixed across tasks, whereas the task-specific parameters act as local adapters for each task and efficiently transform the previous task's feature map into the new task's feature map. Moreover, we propose an element-wise residual bias in the transformed feature space, which substantially stabilizes GAN training. In contrast to recent approaches for continual GANs, we do not rely on memory replay, regularization towards previous tasks' parameters, or expensive weight transformations. Through extensive experiments on challenging and diverse datasets, we show that this feature-map-transformation-based approach outperforms state-of-the-art continual GAN methods with substantially fewer parameters, and also generates high-quality samples that can be used in generative-replay-based continual learning of discriminative tasks.
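To make the mechanism in the abstract concrete, the following is a minimal NumPy sketch of one possible instantiation: a frozen (global) generator layer produces a feature map, and a lightweight task-specific adapter transforms it with per-channel scale and shift parameters plus an element-wise residual bias over the transformed feature map. All class and variable names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np


class TaskAdapter:
    """Hypothetical per-task adapter: a parameter-efficient
    transformation of a frozen global layer's feature map.

    scale/shift are small per-channel task-specific parameters;
    bias is the element-wise residual bias added in the
    transformed feature space (per the abstract, this residual
    term is what stabilizes GAN training).
    """

    def __init__(self, channels, height, width, rng):
        # Initialize near the identity so early training stays
        # close to the previous task's feature map.
        self.scale = 1.0 + 0.01 * rng.standard_normal(channels)
        self.shift = np.zeros(channels)
        self.bias = np.zeros((channels, height, width))

    def __call__(self, fmap):
        # fmap: (C, H, W) feature map from the frozen global layer.
        out = fmap * self.scale[:, None, None] + self.shift[:, None, None]
        # Element-wise residual bias in the transformed space.
        return out + self.bias


rng = np.random.default_rng(0)
global_fmap = rng.standard_normal((4, 8, 8))  # shared feature map
adapter = TaskAdapter(4, 8, 8, rng)           # one adapter per task
task_fmap = adapter(global_fmap)
print(task_fmap.shape)
```

Note the parameter economy this sketch illustrates: per task, the adapter stores only `2*C` scale/shift values plus the `C*H*W` residual bias, while the large global generator weights are shared and frozen across all tasks.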