Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. Popular incremental learning methods mitigate such forgetting by retaining a subset of previously seen samples and replaying them when training on subsequent tasks. However, this is not always possible, e.g., due to data protection regulations. In such restricted scenarios, one can employ generative models to replay either artificial images or hidden features to a classifier. In this work, we propose Genifer (GENeratIve FEature-driven image Replay), where a generative model is trained to replay images that must induce the same hidden features as real samples when they are passed through the classifier. Our technique therefore incorporates the benefits of both image and feature replay, i.e.: (1) unlike conventional image replay, our generative model explicitly learns the distribution of features that are relevant for classification; (2) in contrast to feature replay, our entire classifier remains trainable; and (3) we can leverage image-space augmentations, which increase distillation performance while also mitigating overfitting during the training of the generative model. We show that Genifer substantially outperforms the previous state of the art in various settings on the CIFAR-100 and CUB-200 datasets.
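To make the core idea concrete, the following is a minimal sketch of feature-driven generator training, not the paper's actual objective: it assumes a simple DCGAN-style generator for 32x32 images, a frozen copy of the classifier's backbone exposed as `feature_extractor`, and a batch-mean feature-matching loss as a stand-in; all names and hyperparameters are illustrative. Genifer's full method additionally uses distillation and image-space augmentations, which are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Illustrative DCGAN-style generator for 32x32 images (e.g. CIFAR-100)."""
    def __init__(self, z_dim: int = 128, img_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.ReLU(),       # 1x1  -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 4x4  -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8  -> 16x16
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh()  # 16x16 -> 32x32
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z[:, :, None, None])

def feature_replay_step(generator: Generator,
                        feature_extractor: nn.Module,
                        real_images: torch.Tensor,
                        optimizer: torch.optim.Optimizer,
                        z_dim: int = 128) -> float:
    """One generator update: generated images should induce the same hidden
    features as real samples when passed through the (frozen) classifier
    backbone. Batch-mean feature matching is used here as a stand-in loss."""
    with torch.no_grad():
        real_feats = feature_extractor(real_images)       # target hidden features
    z = torch.randn(real_images.size(0), z_dim, device=real_images.device)
    fake_feats = feature_extractor(generator(z))          # features of replayed images
    loss = F.mse_loss(fake_feats.mean(dim=0), real_feats.mean(dim=0))
    optimizer.zero_grad()
    loss.backward()                                       # gradients flow into the generator only
    optimizer.step()
    return loss.item()
```

In a full incremental-learning loop, the classifier would then train on a mix of new-task data and images sampled from such a generator, with a distillation term tying its hidden features to those of the previous-task classifier; that part is not shown in this sketch.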