Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections among sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through sentence nodes and provide concept connections between sentences. We jointly apply multi-task learning for sentence-level and concept-level knowledge selection and show that it improves sentence-level selection. Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines on both the knowledge selection task and the end-to-end response generation task on Holl-E, and improves generalization to unseen topics in Wizard of Wikipedia (WoW).