End-to-end task-oriented dialog systems usually suffer from the challenge of incorporating knowledge bases. In this paper, we propose a novel yet simple end-to-end differentiable model called memory-to-sequence (Mem2Seq) to address this issue. Mem2Seq is the first neural generative model that combines multi-hop attention over memories with the idea of pointer networks. We empirically show how Mem2Seq controls each generation step, and how its multi-hop attention mechanism helps in learning correlations between memories. In addition, our model is quite general and requires no complicated task-specific designs. As a result, we show that Mem2Seq can be trained faster and attains state-of-the-art performance on three different task-oriented dialog datasets.
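As a rough illustration of the mechanism the abstract names — multi-hop attention over a memory, whose final attention weights can double as a pointer distribution over memory positions — the following is a minimal NumPy sketch. The shapes, the additive query update, and the shared embedding for all hops are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_attention(memory, query, hops=3):
    """Multi-hop attention over a memory matrix (one embedding per
    memory cell). At each hop, the query attends over all cells and
    is updated with the attention-weighted read-out; the last hop's
    attention weights can serve as a pointer distribution over
    memory positions. A didactic sketch, not the Mem2Seq model."""
    q = query
    for _ in range(hops):
        scores = memory @ q      # (n_cells,) match score per cell
        p = softmax(scores)      # attention over memory cells
        o = p @ memory           # weighted read-out, shape (dim,)
        q = q + o                # update query for the next hop
    return q, p                  # final query state, pointer dist.

# toy usage: 5 memory cells of dimension 4 (hypothetical values)
rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 4))
query = rng.normal(size=4)
q_out, ptr = multi_hop_attention(memory, query)
```

In a generative decoder, `ptr` would let the model copy a token directly from a memory position (the pointer-network idea), while `q_out` would feed the vocabulary distribution for ordinary generation.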