In previous research, knowledge selection tasks mostly rely on language-model-based methods or knowledge ranking. However, approaches that rely solely on a language model take all knowledge as sequential input, even though knowledge carries no sequential information in most circumstances. On the other hand, knowledge ranking methods leverage the dialog history and each given piece of knowledge, but not the relations between pieces of knowledge. In the 10th Dialog System Technology Challenge (DSTC10), we participated in the second track, Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations. To address the problems mentioned above, we modified training methods based on SOTA models for the first and third sub-tasks, and propose the Graph-Knowledge Selector (GKS), a graph-attention-based model combined with a language model, for the knowledge selection sub-task (sub-task two). GKS makes knowledge selection decisions in the dialog by simultaneously considering every knowledge embedding generated by the language model, without sequential features. GKS also leverages the full set of candidate knowledge in its decision-making, taking relations across knowledge into account as part of the selection process. GKS outperforms several SOTA models proposed for knowledge selection on the dataset from the 9th Dialog System Technology Challenge (DSTC9).
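To make the described architecture concrete, below is a minimal PyTorch sketch (not the authors' released code) of graph-attention knowledge selection in the spirit of GKS: each candidate knowledge snippet is encoded independently by a language model, a graph-attention layer lets the candidate embeddings attend to one another so that relations across knowledge inform the decision, and a scoring head selects a snippet. The class name, hidden dimension, and the fully connected candidate graph are illustrative assumptions.

```python
# A minimal sketch of graph-attention knowledge selection, assuming one
# LM embedding per [dialog history; knowledge snippet] pair. Names and
# dimensions are hypothetical, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphKnowledgeSelector(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        # GAT-style attention coefficient over concatenated node pairs.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, knowledge_emb: torch.Tensor) -> torch.Tensor:
        # knowledge_emb: (num_candidates, hidden_dim); candidates are a set,
        # so no sequential order is imposed on them.
        h = self.proj(knowledge_emb)                        # (N, D)
        n = h.size(0)
        # Fully connected graph: every candidate attends to every candidate.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),               # node i
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)      # node j
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))      # (N, N) raw scores
        alpha = F.softmax(e, dim=-1)                        # attention weights
        h_agg = alpha @ h                                   # relation-aware embeddings
        return self.score(h_agg).squeeze(-1)                # (N,) selection logits

# Usage: pick the candidate with the highest logit.
emb = torch.randn(5, 768)   # stand-in for LM embeddings of 5 candidates
logits = GraphKnowledgeSelector()(emb)
selected = logits.argmax().item()
```

Because the attention operates over an unordered, fully connected graph rather than a concatenated sequence, each candidate's score can reflect all other candidates without introducing the artificial ordering that sequential LM input would impose.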