Response generation is a critical component of task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm for adapting such extremely large language models is fine-tuning on the downstream task, which is not only time-consuming but also demands significant resources and access to fine-tuning data. Prompting \citep{schick2020exploiting} has emerged as an alternative to fine-tuning in many NLP tasks. In this work, we explore the use of prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs \textit{contextual dynamic prompting}, where the prompts are learned from dialog contexts, with the aim of distilling useful prompting signals from the dialog context. In experiments on the MultiWOZ 2.2 dataset \cite{zang2020multiwoz}, we show that contextual dynamic prompts improve response generation by 3 absolute points in \textit{combined score} \cite{mehri-etal-2019-structured}, and by a massive 20 points when dialog states are incorporated. Furthermore, human annotation of these conversations found that agents incorporating context were preferred over agents with vanilla prefix-tuning.