Pre-trained language models have recently been shown to benefit task-oriented dialogue (TOD) systems. Despite their success, existing methods often formulate this task as a cascaded generation problem, which can lead to error accumulation across sub-tasks and greater data annotation overhead. In this study, we present PPTOD, a unified plug-and-play model for task-oriented dialogue. In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn primary TOD task-completion skills from heterogeneous dialogue corpora. We extensively test our model on three benchmark TOD tasks: end-to-end dialogue modelling, dialogue state tracking, and intent classification. Experimental results show that PPTOD achieves new state-of-the-art results on all evaluated tasks in both high-resource and low-resource scenarios. Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent, as judged by human annotators.