Task-oriented dialogue generation is challenging because the underlying knowledge is often dynamic and hard to incorporate effectively into the learning process, which makes it particularly difficult to generate responses that are both human-like and informative. Recent research has primarily focused on various knowledge distillation methods, which fail to effectively capture the underlying relationships between facts in a knowledge base. In this paper, we go one step further and demonstrate how the structural information of a knowledge graph can improve the system's inference capabilities. Specifically, we propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model. Our proposed system views relational knowledge as a knowledge graph and introduces (1) a structure-aware knowledge embedding technique and (2) a knowledge graph-weighted attention masking strategy that helps the system select relevant information during dialogue generation. An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
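The abstract does not give implementation details, but the idea of a graph-weighted attention mask can be sketched in a minimal form. The snippet below is an illustrative assumption, not the paper's actual method: it supposes each knowledge-graph triple token carries a relevance weight, masks out low-relevance positions before the softmax, and then rescales the resulting attention by the graph weights.

```python
import numpy as np

def graph_weighted_attention(scores, graph_weights, mask_threshold=0.1):
    """Illustrative sketch of graph-weighted attention masking.

    scores        : (n,) raw attention logits over KG triple tokens
    graph_weights : (n,) hypothetical relevance weight per token's triple
    mask_threshold: triples below this relevance are masked out entirely
    """
    scores = np.asarray(scores, dtype=float)
    graph_weights = np.asarray(graph_weights, dtype=float)

    # Mask: positions whose graph weight falls below the threshold
    # receive -inf so they vanish after the softmax.
    masked = np.where(graph_weights >= mask_threshold, scores, -np.inf)

    # Numerically stable softmax over the surviving positions.
    exp = np.exp(masked - masked.max())
    attn = exp / exp.sum()

    # Re-weight attention by graph relevance and renormalize,
    # so structurally important triples receive more probability mass.
    weighted = attn * graph_weights
    return weighted / weighted.sum()
```

Under this sketch, a triple judged irrelevant by the graph (weight below the threshold) contributes exactly zero attention, while the remaining mass is redistributed in proportion to both the model's logits and the graph-derived weights.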