We present an approach to continual learning (CL) based on fully probabilistic (or generative) models of machine learning. In contrast to, e.g., GANs, which are "generative" only in the sense that they can generate samples, fully probabilistic models aim to model the data distribution directly. Consequently, they provide functionalities that are highly relevant for continual learning, such as density estimation (outlier detection) and sample generation. As a concrete realization of generative continual learning, we propose Gaussian Mixture Replay (GMR). GMR is a pseudo-rehearsal approach that uses a single Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities. Relying on the MNIST, FashionMNIST and Devanagari benchmarks, we first demonstrate unsupervised task-boundary detection by GMM density estimation, which we also use to reject untypical generated samples. In addition, we show that GMR is capable of class-conditional sampling in the manner of a cGAN. Lastly, we verify that GMR, despite its simple structure, achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
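The two GMM functionalities the abstract relies on, density estimation for detecting untypical inputs and sampling for replay, can be illustrated with a minimal sketch. This is not the paper's implementation; it uses scikit-learn's `GaussianMixture` on toy 2-D data, and the rejection threshold (the 5th percentile of training log-likelihoods) is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "task" data: two Gaussian blobs standing in for one task's classes.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Density estimation: a low log-likelihood flags an outlier, i.e. a sample
# that may belong to a new, unseen task (task-boundary detection).
inlier_ll = gmm.score_samples(X)
outlier_ll = gmm.score_samples(np.array([[20.0, 20.0]]))

# Sample generation for replay; reject untypical generated samples whose
# log-likelihood falls below a threshold (here: 5th percentile, an assumption).
samples, _ = gmm.sample(100)
threshold = np.percentile(inlier_ll, 5)
kept = samples[gmm.score_samples(samples) > threshold]
```

In a continual-learning loop, `kept` would be mixed into the next task's training data as pseudo-rehearsal, and a sustained drop in `score_samples` on incoming data would signal a task boundary.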