Recently, pretrained language models (PLMs) have achieved exceptional success in language generation. To leverage the rich knowledge encoded in PLMs, a simple yet powerful mechanism is to use prompts, in the form of either discrete tokens or continuous embeddings. In existing studies, manual prompts are time-consuming to craft and require domain expertise, while continuous prompts are typically independent of the inputs. To address this issue, we propose a novel continuous prompting approach, called Context-Tuning, to fine-tune PLMs for natural language generation. First, the prompts are derived from the input text, so that they can elicit useful knowledge from PLMs for generation. We refer to such prompts as contextualized prompts. Second, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the generation process by modeling an inverse generation process from output to input. Moreover, we propose a lightweight variant of context-tuning that fine-tunes only 0.4% of the parameters while retaining good performance.
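To make the inverse-prompting idea concrete, the sketch below re-scores candidate outputs by combining a forward score, standing in for log p(output | input), with an inverse score, standing in for log p(input | output). This is a toy illustration under assumed scoring functions (simple word overlap), not the paper's actual models or training procedure.

```python
# Hedged sketch of the inverse-prompting idea: candidate outputs are
# re-ranked by a weighted combination of a forward score (toy stand-in for
# log p(y | x)) and an inverse score (toy stand-in for log p(x | y)).
# All scoring functions here are hypothetical placeholders.
import math


def forward_score(x: str, y: str) -> float:
    # Toy stand-in for log p(y | x): reward word overlap with the input.
    xw, yw = set(x.split()), set(y.split())
    return math.log(1 + len(xw & yw))


def inverse_score(x: str, y: str) -> float:
    # Toy stand-in for log p(x | y): how well the output "explains"
    # (covers) the words of the input.
    xw, yw = set(x.split()), set(y.split())
    return math.log(1 + len(xw & yw) / max(len(xw), 1))


def rerank(x: str, candidates: list[str], alpha: float = 0.5) -> str:
    # alpha balances the forward and inverse generation directions.
    return max(
        candidates,
        key=lambda y: (1 - alpha) * forward_score(x, y)
        + alpha * inverse_score(x, y),
    )


x = "the cat sat on the mat"
cands = ["a dog ran", "the cat sat quietly on the mat", "hello world"]
print(rerank(x, cands))
```

The candidate sharing the most content with the input wins, mirroring how modeling the output-to-input direction pushes generation toward input-relevant text.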