Maintaining engagement and consistency is particularly important in dialogue systems. Existing works have improved the performance of dialogue systems by explicitly learning interlocutor personas with sophisticated network structures. One issue with this approach is that it requires additional annotated persona corpora. Moreover, these models typically perform next-utterance prediction to generate a response but neglect discourse coherence across the entire conversation. To address these issues, this study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks. Entailment text pairs from a natural language inference dataset are used to learn latent entailment relations as external memories via a premise-to-hypothesis generation task. In addition, an internal memory with a similar architecture is applied to the discourse information within the dialogue. Placing orthogonality restrictions on these two memory spaces ensures that the latent entailment relations remain dialogue-independent. The two memories collaborate to produce entailment and discourse representations for generation, enabling a deeper understanding of both consistency and coherence. Experiments on two large public datasets, PersonaChat and DSTC7-AVSD, demonstrate the effectiveness of the proposed method. Both automatic and human evaluations indicate that the proposed model outperforms several strong baselines in terms of both persona consistency and response coherence. Our source code is available at https://github.com/Chenrj233/LMEDR.
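The orthogonality restriction mentioned above can be illustrated with a minimal sketch. This is not the paper's actual training objective (the function name and matrix shapes are assumptions for illustration); it shows one common way to penalize overlap between two memory spaces, using the squared Frobenius norm of their cross-product, which is zero exactly when every external (entailment) memory slot is orthogonal to every internal (discourse) memory slot:

```python
import numpy as np

def orthogonality_penalty(M_ent: np.ndarray, M_dis: np.ndarray) -> float:
    """Penalty encouraging two memory spaces to be orthogonal.

    M_ent: (n_e, d) external (entailment) memory slots.
    M_dis: (n_d, d) internal (discourse) memory slots.
    Returns ||M_ent @ M_dis.T||_F^2, i.e. the sum of squared inner
    products between every pair of slots across the two memories.
    """
    cross = M_ent @ M_dis.T          # (n_e, n_d) pairwise inner products
    return float(np.sum(cross ** 2))  # zero iff the spaces are orthogonal

# Orthogonal slots incur no penalty; identical slots are penalized.
e = np.array([[1.0, 0.0, 0.0]])
d = np.array([[0.0, 1.0, 0.0]])
print(orthogonality_penalty(e, d))  # → 0.0
print(orthogonality_penalty(e, e))  # → 1.0
```

In training, such a term would typically be added to the generation loss with a weighting coefficient, so that the entailment memory cannot absorb dialogue-specific discourse information.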