Existing studies in conversational AI mostly treat task-oriented dialog (TOD) and question answering (QA) as separate tasks. Toward the goal of building a conversational agent that can both complete user tasks and support information seeking, it is important to construct a system that handles TOD and QA with access to various external knowledge sources. In this work, we propose a new task, Open-Book TOD (OB-TOD), which combines TOD with QA and expands the external knowledge sources to include both explicit knowledge sources (e.g., the Web) and implicit knowledge sources (e.g., pre-trained language models). We create a new dataset, OB-MultiWOZ, in which TOD sessions are enriched with QA-like information-seeking experiences grounded in external knowledge. We propose a unified model, OPERA (Open-book End-to-end Task-oriented Dialog), which can appropriately access explicit and implicit external knowledge to tackle the proposed task. Experimental results demonstrate OPERA's superior performance compared to closed-book baselines and illustrate the value of both types of knowledge.