We present a continual learning approach for generative adversarial networks (GANs), based on designing and leveraging parameter-efficient feature-map transformations. Our approach learns a set of global and task-specific parameters: the global parameters are fixed across tasks, whereas the task-specific parameters act as local adapters for each task and efficiently yield task-specific feature maps. Moreover, we propose an element-wise addition of a residual bias in the transformed feature space, which further stabilizes GAN training in this setting. Our approach also leverages task-similarity information derived from the Fisher information matrix; exploiting this knowledge from previous tasks significantly improves model performance. The similarity measure additionally curbs parameter growth during continual adaptation and helps learn a compact model. In contrast to recent approaches for continually learned GANs, the proposed approach offers a memory-efficient way to perform effective continual data generation. Through extensive experiments on challenging and diverse datasets, we show that the feature-map-transformation approach outperforms state-of-the-art methods for continually learned GANs while using substantially fewer parameters. The proposed method generates high-quality samples that can also improve generative-replay-based continual learning for discriminative tasks.
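To make the parameter-efficiency argument concrete, the following is a minimal NumPy sketch of a task-conditioned feature transform in the spirit described above: a shared global weight matrix, plus a small per-task scale/shift adapter and an element-wise residual bias. All names, shapes, and the specific scale/shift parameterization here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch (shapes and parameterization assumed, not from the paper).
# A global weight W is shared and frozen across tasks; each new task learns only
# a small per-channel scale gamma_t, shift beta_t, and an element-wise residual
# bias b_t, so per-task parameter growth is O(d_out) instead of O(d_out * d_in).

rng = np.random.default_rng(0)
d_in, d_out, n_tasks = 8, 4, 3

W = rng.standard_normal((d_out, d_in))  # global parameters, fixed across tasks

adapters = {
    t: {
        "gamma": np.ones(d_out),                    # task-specific scale
        "beta": np.zeros(d_out),                    # task-specific shift
        "bias": rng.standard_normal(d_out) * 0.01,  # residual bias (element-wise)
    }
    for t in range(n_tasks)
}

def task_feature_map(x, task_id):
    """Transform shared base features with the given task's local adapter."""
    a = adapters[task_id]
    base = W @ x                                    # shared feature map
    return a["gamma"] * base + a["beta"] + a["bias"]

x = rng.standard_normal(d_in)
outs = [task_feature_map(x, t) for t in range(n_tasks)]

# Per-task adapter cost vs. a full task-specific layer:
print("adapter params per task:", 3 * d_out, "| full layer:", d_out * d_in)
```

Under these assumed shapes, each additional task costs only `3 * d_out` parameters rather than a full `d_out * d_in` layer, which is the kind of saving the abstract's "substantially fewer parameters" claim refers to.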