In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs. We experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs, our logical form of choice. Our proposed model generally outperforms the other models on two QA datasets over the DBpedia knowledge graph, evaluated in different settings. In addition, we show that transfer learning from the larger of those QA datasets to the smaller dataset yields substantial improvements, effectively offsetting the general lack of training data.