We introduce Progressive Prompts - a simple and efficient approach for continual learning in language models. Our method enables forward transfer and resists catastrophic forgetting, without relying on data replay or a large number of task-specific parameters. Progressive Prompts learns a new soft prompt for each task and sequentially concatenates it with the previously learned prompts, while keeping the base model frozen. Experiments on standard continual learning benchmarks show that our approach outperforms state-of-the-art methods, with an improvement of over 20% in average test accuracy over the previous best-performing method on the T5 model. We also explore a more challenging continual learning setup with longer sequences of tasks and show that Progressive Prompts significantly outperforms prior methods.
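To make the mechanism described above concrete, the following is a minimal sketch of per-task soft prompts that are concatenated over tasks while the base model stays frozen. Class and method names (ProgressivePrompts, add_task), the prompt length, and the initialization scale are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ProgressivePrompts(nn.Module):
    """Sketch: one trainable soft prompt per task, concatenated with all
    previously learned (and frozen) prompts; the base model is frozen."""

    def __init__(self, base_model, embed_dim, prompt_len=10):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False          # keep the base model frozen
        self.embed_dim = embed_dim
        self.prompt_len = prompt_len
        self.prompts = nn.ParameterList()    # one soft prompt per seen task

    def add_task(self):
        """Start a new task: freeze old prompts, add a fresh trainable one."""
        for p in self.prompts:
            p.requires_grad = False
        new_prompt = nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02)
        self.prompts.append(new_prompt)

    def forward(self, input_embeds, **kwargs):
        """Prepend all learned prompts (oldest first) to the token embeddings."""
        batch = input_embeds.size(0)
        prompt_stack = torch.cat(list(self.prompts), dim=0)             # (k * len, dim)
        prompt_stack = prompt_stack.unsqueeze(0).expand(batch, -1, -1)  # (B, k * len, dim)
        extended = torch.cat([prompt_stack, input_embeds], dim=1)
        return self.base_model(inputs_embeds=extended, **kwargs)
```

In a training loop, one would call add_task() before each new task and optimize only the last prompt's parameters, so earlier tasks' prompts (and the base model) are never updated.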