This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts, small prefix embedding vectors pre-trained for different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared across tasks in multi-task training, are updated, while the original LM and source prompts are kept intact. ATTEMPT is highly parameter-efficient (e.g., it updates 2,300 times fewer parameters than full fine-tuning) while achieving high task performance using knowledge from high-resource tasks. Moreover, because it relies on pre-trained soft prompts, it is modular and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned models and other parameter-efficient tuning approaches that use over ten times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings.
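To make the mechanism concrete, the following is a minimal PyTorch sketch of the instance-wise attentional mixture described above. The module name (AttentionalPromptMixture), the mean/max pooling used to summarize the input and the prompts, and the down/up projection inside the attention module are illustrative assumptions rather than the paper's exact implementation; the sketch only shows how frozen source prompts and a trainable target prompt can be interpolated per instance and prepended to the input embeddings of a frozen LM.

```python
# Minimal sketch of instance-wise attentional prompt mixing (assumptions noted below).
# Assumes frozen pre-trained source prompts of shape (num_sources, prompt_len, dim);
# module/variable names and pooling choices are illustrative, not the authors' code.
import torch
import torch.nn as nn


class AttentionalPromptMixture(nn.Module):
    def __init__(self, source_prompts, prompt_len=100, dim=768, proj_dim=64):
        super().__init__()
        # Frozen source prompts: (num_sources, prompt_len, dim); not updated.
        self.register_buffer("source_prompts", source_prompts)
        # Newly initialized target prompt (trainable).
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Attention module (trainable, shared across tasks in multi-task training);
        # a small down/up projection is one plausible parameter-efficient choice.
        self.down = nn.Linear(dim, proj_dim, bias=False)
        self.up = nn.Linear(proj_dim, dim, bias=False)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, dim) embedded tokens of each instance.
        x = input_embeds.mean(dim=1)                  # (batch, dim) instance summary
        query = self.up(torch.relu(self.down(x)))     # (batch, dim)
        # Candidate prompts: frozen sources plus the trainable target prompt.
        prompts = torch.cat(
            [self.source_prompts, self.target_prompt.unsqueeze(0)], dim=0
        )                                             # (num_sources+1, prompt_len, dim)
        keys = prompts.max(dim=1).values              # (num_sources+1, dim)
        attn = torch.softmax(query @ keys.T, dim=-1)  # (batch, num_sources+1)
        # Instance-wise interpolation of prompts; the target prompt is added back
        # so its parameters always receive gradient signal.
        mixed = torch.einsum("bs,sld->bld", attn, prompts) + self.target_prompt
        # Prepend the mixed prompt to the input embeddings before the frozen LM.
        return torch.cat([mixed, input_embeds], dim=1)
```

In this sketch, only target_prompt and the down/up projections carry gradients; the source prompts are registered as a buffer and the backbone LM (not shown) would be kept frozen, which is what keeps the number of updated parameters small.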