Relation linking is essential for question answering over knowledge bases. Although there have been various efforts to improve relation linking performance, current state-of-the-art methods still fall short of optimal results, negatively impacting overall end-to-end question answering performance. In this work, we propose a novel approach for relation linking, framing it as a generative problem that facilitates the use of pre-trained sequence-to-sequence models. We extend such sequence-to-sequence models by infusing structured data from the target knowledge base, primarily to enable these models to handle the nuances of the knowledge base. Moreover, we train the model to generate a structured output consisting of a list of argument-relation pairs, which enables a knowledge validation step. We compare our method against existing relation linking systems on four different datasets derived from DBpedia and Wikidata. Our method achieves large improvements over the state of the art while using a much simpler model that can be easily adapted to different knowledge bases.
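To make the generative framing concrete, the following is a minimal sketch (not the authors' implementation) of how relation linking could be cast as sequence-to-sequence generation with a pre-trained model from the Hugging Face transformers library. The checkpoint name, the linearized "relation(argument)" output format, and the example question are illustrative assumptions; in practice the model would first be fine-tuned on question/argument-relation pairs and its output validated against the target knowledge base.

```python
# Sketch: relation linking as conditional generation of argument-relation pairs.
# Assumptions: a BART-style seq2seq model and a simple "relation(argument)" output
# schema, both hypothetical stand-ins for the paper's actual setup.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-base"  # assumed checkpoint; any seq2seq model could be used
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

question = "Who is the author of the novel that inspired Blade Runner?"

# After fine-tuning, the model would emit a structured string such as
# "author(novel) | basedOn(Blade Runner)", which is then parsed into
# argument-relation pairs and checked against the knowledge base
# (e.g., DBpedia or Wikidata) in a validation step.
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

def parse_pairs(text: str):
    """Parse a linearized 'relation(argument)' list into (argument, relation) pairs."""
    pairs = []
    for chunk in text.split("|"):
        chunk = chunk.strip()
        if "(" in chunk and chunk.endswith(")"):
            relation, argument = chunk[:-1].split("(", 1)
            pairs.append((argument.strip(), relation.strip()))
    return pairs

print(parse_pairs(prediction))  # without fine-tuning the output is not meaningful
```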