We propose a new method for unsupervised generative continual learning based on the realignment of a Variational Autoencoder's latent space. Deep generative models suffer from catastrophic forgetting in the same way as other neural structures. Recent generative continual learning works address this problem by trying to learn from new data without forgetting previous knowledge. However, these methods usually focus on artificial scenarios where examples share almost no similarity between subsequent portions of data, an assumption that is unrealistic in real-life applications of continual learning. In this work, we identify this limitation and frame the goal of generative continual learning as a knowledge accumulation task. We solve it by continuously aligning latent representations of new data, which we call bands, in an additional latent space where examples are encoded independently of their source task. In addition, we introduce a method for controlled forgetting of past data that simplifies this process. On top of the standard continual learning benchmarks, we propose a novel, challenging knowledge consolidation scenario and show that the proposed approach outperforms the state of the art by up to a factor of two across all experiments and an additional real-life evaluation. To our knowledge, Multiband VAE is the first method to show forward and backward knowledge transfer in generative continual learning.
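To make the band-alignment idea concrete, the following is a minimal sketch of one plausible reading of the abstract: each task's examples are encoded into a task-local latent code (a band), and a shared translator network maps that code, together with a task identifier, into a single global latent space where examples are represented independently of their source task. All component names, layer sizes, and the `BandTranslator` module itself are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BandTranslator(nn.Module):
    """Hypothetical sketch: maps a task-local latent code ("band") plus a
    learned task embedding into a shared global latent space, so that
    examples from all tasks can be decoded by one common decoder."""

    def __init__(self, local_dim=8, task_embed_dim=4, global_dim=32, n_tasks=10):
        super().__init__()
        self.task_embed = nn.Embedding(n_tasks, task_embed_dim)
        self.net = nn.Sequential(
            nn.Linear(local_dim + task_embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, global_dim),
        )

    def forward(self, z_local, task_id):
        e = self.task_embed(task_id)                       # (B, task_embed_dim)
        return self.net(torch.cat([z_local, e], dim=-1))   # (B, global_dim)

# Usage sketch: z_local would come from an encoder trained on the current
# task; aligning bands in the global space is what lets knowledge from
# different tasks accumulate in one shared representation.
translator = BandTranslator()
z_local = torch.randn(16, 8)                      # local codes for a batch
task_id = torch.full((16,), 3, dtype=torch.long)  # all examples from task 3
z_global = translator(z_local, task_id)           # (16, 32)
```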