Because labeling costs for the different modules of task-oriented dialog (ToD) systems are high, a major practical challenge is to learn each task from the least amount of labeled data. Recently, prompting methods over pre-trained language models (PLMs) have shown promising results for few-shot learning in ToD. To better utilize the power of PLMs, this paper proposes Comprehensive Instruction (CINS), which exploits PLMs with extra task-specific instructions. We design a schema (definition, constraint, prompt) for instructions and their customized realizations for three important downstream tasks in ToD, i.e., intent classification, dialog state tracking, and natural language generation. A sequence-to-sequence model (T5) is adopted to solve these three tasks in a unified framework. Extensive experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data. Empirical results demonstrate that the proposed CINS approach consistently outperforms techniques that fine-tune PLMs with raw inputs or short prompts.
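To make the schema concrete, the following is a minimal sketch of how a CINS-style instruction might be assembled from its three parts (definition, constraint, prompt) and fed to T5 for intent classification. The instruction wording, the `build_instruction` helper, and the label set are illustrative assumptions, not the paper's verbatim templates.

```python
# A minimal sketch (not the authors' released code) of composing a
# CINS-style instruction and decoding an intent label with T5.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def build_instruction(utterance: str, labels: list) -> str:
    """Compose the three schema parts: definition, constraint, prompt.
    The exact phrasings here are assumptions for illustration."""
    definition = ("Intent classification is the task of identifying "
                  "the purpose of a user utterance.")
    constraint = "The intent must be one of: " + ", ".join(labels) + "."
    prompt = f'Utterance: "{utterance}" The intent of this utterance is'
    return " ".join([definition, constraint, prompt])

# Hypothetical utterance and label set for illustration.
text = build_instruction(
    "book me a table for two at 7pm",
    ["book_restaurant", "play_music", "get_weather"],
)
inputs = tokenizer(text, return_tensors="pt")
# T5 generates the label as free-form text in the unified seq2seq framework.
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern extends to dialog state tracking and natural language generation by swapping in task-specific definitions, constraints, and prompts while keeping the single T5 model unchanged.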