Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to retain a few exemplars of the previous concepts, but the effectiveness of this approach depends heavily on how representative those exemplars are. This paper proposes a novel and automatic framework, which we call mnemonics, that parameterizes exemplars and makes them optimizable in an end-to-end manner. We train the framework through bilevel optimization, i.e., at the model level and the exemplar level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset, and ImageNet, and show that using mnemonics exemplars surpasses the state-of-the-art by a large margin. Intriguingly, the mnemonics exemplars tend to lie on the boundaries between different classes.
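The abstract only names the bilevel scheme, so a toy sketch may help make it concrete. Below is a minimal PyTorch sketch of the exemplar-level optimization under strong simplifying assumptions: the model is reduced to a single linear classifier `W` so its inner update can be written functionally, and `x_val`/`y_val` stand in for real data of the old classes. All shapes, learning rates, and variable names here are illustrative placeholders, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_cls, k = 32, 5, 10                        # feature dim, classes, exemplar count
W = torch.randn(n_cls, d, requires_grad=True)  # toy linear model (model level)

# Parameterize exemplars: trainable tensors, initialized from (toy) real samples.
x_init = torch.randn(k, d)
exemplars = x_init.clone().requires_grad_(True)
exemplar_labels = torch.randint(0, n_cls, (k,))

# Held-out real data of the old classes (assumed available in this sketch).
x_val = torch.randn(64, d)
y_val = torch.randint(0, n_cls, (64,))

ex_opt = torch.optim.SGD([exemplars], lr=0.1)  # exemplar-level optimizer
inner_lr = 0.1

for step in range(100):
    # Model-level (inner) step: one SGD update of W on the exemplars, kept
    # differentiable (create_graph=True) so gradients can reach the exemplars.
    inner_loss = F.cross_entropy(exemplars @ W.t(), exemplar_labels)
    (gW,) = torch.autograd.grad(inner_loss, W, create_graph=True)
    W_fast = W - inner_lr * gW

    # Exemplar-level (outer) step: adjust the exemplars so that the updated
    # model fits real data, i.e., the exemplars become maximally representative.
    outer_loss = F.cross_entropy(x_val @ W_fast.t(), y_val)
    ex_opt.zero_grad()
    outer_loss.backward()   # backpropagates through the inner update
    ex_opt.step()
    W.grad = None           # W itself is frozen in this sketch; discard its grad
```

The key design choice is `create_graph=True`: it keeps the inner model update differentiable, so the outer loss can backpropagate through it into the exemplar values themselves, which is what makes the exemplars "optimizable in an end-to-end manner."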