Recent studies have attempted to build task-oriented dialogue systems in an end-to-end manner, and existing work has made considerable progress on this task. However, one issue still requires further consideration: how to effectively represent knowledge bases and incorporate them into dialogue systems. To address this issue, we design a novel Transformer-based Context-aware Memory Generator to model the entities in knowledge bases, which produces entity representations that attend to all relevant entities and the dialogue history. Furthermore, we propose the Context-aware Memory Enhanced Transformer (CMET), which effectively aggregates information from the dialogue history and knowledge bases to generate more accurate responses. Extensive experiments show that our method achieves superior performance over state-of-the-art methods.
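To make the described architecture concrete, below is a minimal sketch in PyTorch of the two components named above: a memory generator whose entity representations attend over all entities (self-attention) and over the dialogue history (cross-attention), and a response decoder that attends over both the history and the resulting entity memory. The class names, layer counts, and dimensions are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch, assuming a PyTorch setting; module names and hyperparameters
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class ContextAwareMemoryGenerator(nn.Module):
    """Entity representations that attend to all relevant entities
    (self-attention) and to the dialogue history (cross-attention)."""

    def __init__(self, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.layers = nn.TransformerDecoder(layer, num_layers)

    def forward(self, entity_emb, history_states):
        # entity_emb:     (batch, num_entities, d_model)  KB entity embeddings
        # history_states: (batch, hist_len, d_model)      encoded dialogue history
        return self.layers(tgt=entity_emb, memory=history_states)


class ResponseDecoder(nn.Module):
    """Generates the response while attending over the concatenation of
    dialogue-history states and the context-aware entity memory.
    (Causal masking is omitted for brevity.)"""

    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, response_ids, history_states, entity_memory):
        memory = torch.cat([history_states, entity_memory], dim=1)
        hidden = self.decoder(tgt=self.embed(response_ids), memory=memory)
        return self.out(hidden)  # (batch, resp_len, vocab_size)


# Toy usage with random tensors standing in for encoded inputs.
if __name__ == "__main__":
    batch, hist_len, num_entities, d_model, vocab = 2, 10, 5, 256, 1000
    history_states = torch.randn(batch, hist_len, d_model)
    entity_emb = torch.randn(batch, num_entities, d_model)
    response_ids = torch.randint(0, vocab, (batch, 7))

    memory_gen = ContextAwareMemoryGenerator(d_model)
    decoder = ResponseDecoder(vocab, d_model)

    entity_memory = memory_gen(entity_emb, history_states)
    logits = decoder(response_ids, history_states, entity_memory)
    print(logits.shape)  # torch.Size([2, 7, 1000])
```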