Continual learning refers to the capability of continuously learning from a stream of data. Current research mainly focuses on relieving catastrophic forgetting, and much of this success comes at the cost of limiting performance on newly incoming tasks. Such a trade-off is referred to as the stability-plasticity dilemma and is a more general and challenging problem for continual learning. However, the inherent conflict between these two concepts makes it seemingly impossible to devise a satisfactory solution to both of them simultaneously. Therefore, we ask, "Is it possible to divide them into two separate problems and conquer them independently?" To this end, we propose a prompt-tuning-based method termed PromptFusion to enable the decoupling of stability and plasticity. Specifically, PromptFusion consists of a carefully designed Stabilizer module that deals with catastrophic forgetting and a Booster module that learns new knowledge concurrently. During training, PromptFusion first passes an input image to the two modules separately. The resulting logits are then fused with a learnable weight parameter. Finally, a weight mask is applied to the fused logits to balance between old and new classes. Extensive experiments show that our method achieves promising results on popular continual learning datasets for both class-incremental and domain-incremental settings. Especially on Split-ImageNet-R, one of the most challenging datasets for class-incremental learning, our method exceeds the state-of-the-art prompt-based methods L2P and DualPrompt by more than 10%.
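The abstract's three-step inference (separate forward passes, learnable fusion, class-balancing mask) can be summarized in a minimal PyTorch sketch. The Stabilizer and Booster backbones (prompt-tuned models in the paper) are abstracted away as generic modules, and the scalar parameterization of the fusion weight and the mask-update helper are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class PromptFusionHead(nn.Module):
    """Sketch of the logit-fusion step described in the abstract.

    `stabilizer` and `booster` are assumed to map an image batch to
    per-class logits; their internals (prompt-tuned backbones) are
    omitted here.
    """

    def __init__(self, stabilizer: nn.Module, booster: nn.Module,
                 num_classes: int):
        super().__init__()
        self.stabilizer = stabilizer
        self.booster = booster
        # Learnable weight that fuses the two modules' logits
        # (assumed here to be a single scalar).
        self.lam = nn.Parameter(torch.tensor(0.5))
        # Per-class weight mask balancing old vs. new classes.
        self.register_buffer("mask", torch.ones(num_classes))

    def set_task_split(self, num_old: int, old_w: float, new_w: float):
        # Hypothetical helper: re-weight old/new classes when a new
        # task arrives.
        self.mask[:num_old] = old_w
        self.mask[num_old:] = new_w

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1. Pass the input to the two modules separately.
        logit_s = self.stabilizer(x)  # handles catastrophic forgetting
        logit_b = self.booster(x)     # learns the new knowledge
        # 2. Fuse the resulting logits with the learnable weight.
        fused = self.lam * logit_s + (1.0 - self.lam) * logit_b
        # 3. Apply the weight mask to balance old and new classes.
        return fused * self.mask
```

Under this reading, gradients flow to both modules and to the fusion weight jointly, while the fixed mask only rescales the final logits; whether the mask is learned or scheduled is left open by the abstract.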