Recent studies attempt to build task-oriented dialogue systems in an end-to-end manner, and existing works have made great progress on this task. However, two issues still need to be considered: (1) how to effectively represent knowledge bases and incorporate them into the dialogue system, and (2) how to efficiently reason over the knowledge bases given queries. To address these issues, we design a novel Transformer-based Dynamic Memory Network (DMN) with a novel Memory Mask scheme, which can dynamically generate context-aware knowledge base representations and reason over the knowledge bases simultaneously. Furthermore, we incorporate the dynamic memory network into the Transformer and propose the Dynamic Memory Enhanced Transformer (DMET), which aggregates information from the dialogue history and knowledge bases to generate better responses. Extensive experiments show that our method achieves superior performance over state-of-the-art methods.
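To make the Memory Mask idea concrete, the following is a minimal, illustrative sketch (not the paper's actual DMN/DMET implementation) of attention over knowledge-base memory slots in which masked-out slots receive zero attention weight; the function name and the plain-list tensor representation are hypothetical simplifications.

```python
import math

def masked_memory_attention(query, memory, mask):
    """Attend over KB memory slots with a memory mask.

    query:  list of floats (query vector)
    memory: list of slot vectors (one per KB entry)
    mask:   list of 0/1 flags; slots with mask == 0 get zero
            attention weight, restricting reasoning to the
            KB entries selected by the mask.
    Returns a context-aware weighted sum of the kept slots.
    """
    # Dot-product scores; masked slots are set to -inf so that
    # softmax assigns them exactly zero weight.
    scores = []
    for slot, keep in zip(memory, mask):
        dot = sum(q * s for q, s in zip(query, slot))
        scores.append(dot if keep else float("-inf"))

    # Numerically stable softmax over the masked scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]

    # Weighted sum of memory slots -> context-aware KB representation.
    dim = len(memory[0])
    return [sum(w * slot[d] for w, slot in zip(weights, memory))
            for d in range(dim)]
```

For example, with a mask of `[1, 0]` the second slot contributes nothing to the output, so the returned representation depends only on the unmasked KB entry.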