Exemplar-free class incremental learning requires classification models to learn new class knowledge incrementally without retaining any old samples. Recently, the framework based on parallel one-class classifiers (POC), which trains a one-class classifier (OCC) independently for each category, has attracted extensive attention, since it naturally avoids catastrophic forgetting. POC, however, suffers from weak discriminability and comparability due to its independent training strategy for different OCCs. To meet this challenge, we propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL). DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g., deep SVDD), because a trained VAE can not only estimate the probability of an input sample belonging to a class but also generate pseudo samples of the class to assist in learning new tasks. With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct new-class samples better but old-class pseudo samples worse, thus enhancing the comparability. Furthermore, DisCOIL introduces a hinge reconstruction loss to ensure the discriminability. We evaluate our method extensively on MNIST, CIFAR10, and Tiny-ImageNet. The experimental results show that DisCOIL achieves state-of-the-art performance.
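The contrastive training idea above can be illustrated with a minimal sketch. The abstract does not give the exact loss formulation, so the function below is an assumption: it combines the new-class reconstruction error with a hinge term that penalizes old-class pseudo samples only when the new-class VAE reconstructs them *too well* (i.e., their error falls below a margin). The function name, margin value, and error inputs are hypothetical illustrations, not the paper's definitions.

```python
import numpy as np

def hinge_contrastive_recon_loss(err_new, err_pseudo, margin=1.0):
    """Hedged sketch of a hinge reconstruction loss (assumed form).

    err_new    : per-sample reconstruction errors of the new-class VAE
                 on real new-class samples (should be small).
    err_pseudo : its reconstruction errors on pseudo samples generated
                 by old-class VAEs (should stay above the margin).
    """
    # Minimize reconstruction error on the new class directly.
    fit_term = np.mean(err_new)
    # Hinge: penalize pseudo samples only when their error dips
    # below the margin, pushing the VAE to reconstruct them poorly.
    push_term = np.mean(np.maximum(0.0, margin - err_pseudo))
    return fit_term + push_term

# Pseudo samples already reconstructed badly: hinge term vanishes.
loss_good = hinge_contrastive_recon_loss(
    np.array([0.1, 0.2]), np.array([2.0, 3.0]), margin=1.0)
# Pseudo samples reconstructed too well: hinge term is active.
loss_bad = hinge_contrastive_recon_loss(
    np.array([0.1, 0.2]), np.array([0.5, 0.5]), margin=1.0)
```

Under this assumed form, a well-separated model drives the second term to zero, so reconstruction errors become comparable across independently trained class-specific VAEs.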