In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework. The Bayesian approach provides a general framework for continual learning. We further construct a generative regularization term for any given classification model by leveraging energy-based models and Langevin-dynamics sampling to enrich the features learned in each task. By combining the discriminative and generative losses, we empirically show that the proposed method outperforms state-of-the-art methods on a variety of tasks, avoiding catastrophic forgetting in continual learning. In particular, the proposed method outperforms baseline methods by over 15% on the Fashion-MNIST dataset and by 10% on the CUB dataset.
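To make the idea concrete, below is a minimal PyTorch sketch of one plausible instantiation of the abstract's recipe: treat the classifier itself as an energy-based model, draw negative samples with Langevin dynamics, and add an energy-contrastive term to the cross-entropy loss. The JEM-style energy $E(x) = -\log\sum_y \exp f(x)[y]$, the helper names `energy`, `langevin_sample`, and `combined_loss`, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def energy(model, x):
    # Treat the classifier's logits as an energy-based model:
    # E(x) = -logsumexp_y f(x)[y]  (a common JEM-style choice; assumed here)
    return -torch.logsumexp(model(x), dim=1)

def langevin_sample(model, x_init, steps=20, step_size=1.0, noise_std=0.01):
    # Langevin dynamics on the input:
    # x_{t+1} = x_t - (step_size / 2) * dE/dx + Gaussian noise
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(model, x).sum(), x)[0]
        x = x - 0.5 * step_size * grad + noise_std * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def combined_loss(model, x, y, lam=0.1):
    # Discriminative cross-entropy plus a generative regularizer that
    # pushes the energy of real data below that of Langevin samples.
    ce = F.cross_entropy(model(x), y)
    x_samp = langevin_sample(model, torch.randn_like(x))
    gen = energy(model, x).mean() - energy(model, x_samp).mean()
    return ce + lam * gen
```

In a continual-learning loop, `combined_loss` would simply replace the plain cross-entropy objective for each task; the generative term requires no extra networks, which is consistent with the claim that the regularizer applies to any given classification model.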