With the rapid advances in generative adversarial networks (GANs), the visual quality of synthesised scenes keeps improving, including for complex urban scenes with applications to automated driving. In this work, we address a continual scene generation setup in which GANs are trained on a stream of distinct domains; ideally, the learned models should eventually be able to generate new scenes in all seen domains. This setup reflects the real-life scenario where data are continuously acquired in different places and at different times. In such a continual setup, we aim for learning with zero forgetting, \IE, with no degradation in synthesis quality over earlier domains due to catastrophic forgetting. To this end, we introduce a novel framework that not only (i) enables seamless knowledge transfer in continual training but also (ii) guarantees zero forgetting at a small overhead cost. Thanks to continual learning, our model is more memory efficient and achieves better synthesis quality than the brute-force solution that trains one full model per domain. In particular, under extremely low-data regimes, our approach outperforms the brute-force one by a large margin.
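The abstract does not spell out the mechanism behind the zero-forgetting guarantee. As one illustrative sketch (not the paper's actual architecture), zero forgetting with a small per-domain overhead can be obtained by freezing all previously learned generator parameters and attaching a lightweight trainable head for each new domain; the class name `ContinualGenerator`, the `add_domain` helper, and all dimensions below are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

class ContinualGenerator(nn.Module):
    """Illustrative continual generator: a shared backbone plus small
    per-domain heads. Once a domain is learned, its parameters are never
    updated again, so synthesis quality on earlier domains cannot degrade
    (zero forgetting), at the cost of a small per-domain overhead."""

    def __init__(self, latent_dim=128, hidden_dim=256, out_dim=3 * 32 * 32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList()  # one lightweight head per seen domain
        self.hidden_dim = hidden_dim
        self.out_dim = out_dim

    def add_domain(self):
        """Start a new domain: freeze everything learned so far and
        attach a fresh trainable head for the new domain."""
        if len(self.heads) > 0:  # after the first domain, the backbone is frozen too
            for p in self.parameters():
                p.requires_grad_(False)
        head = nn.Linear(self.hidden_dim, self.out_dim)
        self.heads.append(head)  # new head stays trainable
        return head

    def forward(self, z, domain_id):
        return self.heads[domain_id](self.backbone(z))


# Usage sketch: train domain 0 (backbone + head), then only new heads later.
gen = ContinualGenerator()
gen.add_domain()  # domain 0: backbone and its head are trainable
opt = torch.optim.Adam([p for p in gen.parameters() if p.requires_grad], lr=2e-4)
fake = gen(torch.randn(4, 128), domain_id=0)  # (4, 3*32*32) flat fake images
```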