Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tuning. However, it is challenging to fine-tune PLMs in data-scarce situations. Therefore, it is non-trivial to develop a general and lightweight model that can adapt to various text generation tasks based on PLMs. To fulfill this purpose, recent prompt-based learning offers a potential solution. In this paper, we improve this technique and propose a novel prompt-based method (PTG) for text generation in a transferable setting. First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. To consider both task- and instance-level information, we design an adaptive attention mechanism to derive the target prompts. For each data instance, PTG learns a specific target prompt by attending to highly relevant source prompts. In extensive experiments, PTG yields competitive or better results than fine-tuning methods. We release our source prompts as an open resource, where users can add or reuse them to improve new text generation tasks for future research. Code and data are available at https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation.
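To make the transfer idea concrete, below is a minimal sketch (not the authors' implementation) of instance-level attention over a pool of learned source prompts: each target-task instance attends to the source prompts and receives a weighted combination as its target prompt. Class and variable names are hypothetical, and the actual PTG mechanism additionally incorporates task-level keys.

```python
import torch
import torch.nn as nn


class PromptTransferAttention(nn.Module):
    """Sketch: derive an instance-specific target prompt by attending
    over a pool of source prompts learned on source generation tasks."""

    def __init__(self, num_source_tasks: int, prompt_len: int, hidden_size: int):
        super().__init__()
        # One learned prompt per source task: (num_tasks, prompt_len, hidden)
        self.source_prompts = nn.Parameter(
            torch.randn(num_source_tasks, prompt_len, hidden_size) * 0.02
        )
        # Projections for the attention query (instance encoding) and keys (prompts)
        self.query_proj = nn.Linear(hidden_size, hidden_size)
        self.key_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, instance_repr: torch.Tensor) -> torch.Tensor:
        """instance_repr: (batch, hidden) encoding of the target-task input.
        Returns an instance-specific target prompt: (batch, prompt_len, hidden)."""
        # Summarize each source prompt by mean pooling its tokens: (num_tasks, hidden)
        prompt_keys = self.key_proj(self.source_prompts.mean(dim=1))
        query = self.query_proj(instance_repr)                          # (batch, hidden)
        scores = query @ prompt_keys.T / prompt_keys.size(-1) ** 0.5    # (batch, num_tasks)
        weights = torch.softmax(scores, dim=-1)                         # attention over source prompts
        # Weighted combination of source prompts -> per-instance target prompt
        return torch.einsum("bt,tlh->blh", weights, self.source_prompts)
```

In such a setup, the resulting target prompt would be prepended to the input embeddings of a frozen PLM, so that only the prompt pool and the small attention module are trained for a new generation task.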