Although data-free incremental learning methods are memory-friendly, accurately estimating and counteracting representation shift is challenging in the absence of historical data. This paper addresses this thorny problem by proposing a novel incremental learning method inspired by the human capability for analogy. Specifically, we design an analogy-making mechanism that remaps new data onto an old class via prompt tuning: using only samples of new classes, it mimics the feature distribution of the target old class on the old model. The learned prompts are then used to estimate and counteract the representation shift that fine-tuning induces in the historical prototypes. The proposed method sets new state-of-the-art performance on four incremental learning benchmarks under both the class-incremental and domain-incremental settings. While saving only one feature prototype per class, it consistently outperforms data-replay methods, and on the Core50 benchmark it nearly reaches the empirical upper bound set by joint training. The code will be released at \url{https://github.com/ZhihengCV/A-Prompts}.