Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entity in a knowledge base that lies several hops away from the topic entity mentioned in the question. Existing retrieval-based approaches first generate instructions from the question and then use them to guide multi-hop reasoning over the knowledge graph. Because the instructions are fixed throughout the reasoning procedure and the knowledge graph is not considered during instruction generation, the model cannot revise its mistakes once it predicts an intermediate entity incorrectly. To address this, we propose KBIGER (Knowledge Base Iterative instruction GEnerating and Reasoning), a novel and efficient approach that generates instructions dynamically with the help of the reasoning graph. Instead of generating all instructions before reasoning, we take the (k-1)-th reasoning graph into consideration when building the k-th instruction. In this way, the model can check its predictions against the graph and generate new instructions to revise incorrect predictions of intermediate entities. We conduct experiments on two multi-hop KBQA benchmarks and outperform existing approaches, achieving a new state of the art. Further experiments show that our method does detect incorrect predictions of intermediate entities and is able to revise such errors.
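The iterative interleaving of instruction generation and graph reasoning described above can be sketched in toy form as follows. This is a minimal illustrative sketch, not the paper's actual model: all function names (`generate_instruction`, `reason_step`, `kbiger`), the vector arithmetic, and the toy adjacency matrix are assumptions introduced purely to show the control flow in which the k-th instruction is conditioned on the (k-1)-th reasoning-graph state.

```python
# Toy sketch of KBIGER-style iterative instruction generation and reasoning.
# Everything here (function names, scoring scheme, toy graph) is illustrative,
# not the paper's implementation.

def generate_instruction(question_vec, prev_graph_state):
    # The k-th instruction conditions on both the question and the
    # (k-1)-th reasoning-graph state, so a wrong intermediate prediction
    # can be observed and corrected at the next step.
    return [q + g for q, g in zip(question_vec, prev_graph_state)]

def reason_step(instruction, graph_state, adjacency):
    # Propagate entity scores one hop along the graph, weighted by the
    # current instruction, then renormalize to a distribution.
    n = len(graph_state)
    new_state = [0.0] * n
    for i in range(n):
        for j in range(n):
            if adjacency[i][j]:
                new_state[j] += graph_state[i] * instruction[j % len(instruction)]
    total = sum(new_state) or 1.0
    return [s / total for s in new_state]

def kbiger(question_vec, init_state, adjacency, hops):
    # Unlike fixed-instruction pipelines that compute all instructions
    # up front, each instruction here is rebuilt from the current
    # reasoning graph before the next hop.
    graph_state = init_state
    for _ in range(hops):
        instruction = generate_instruction(
            question_vec, graph_state[: len(question_vec)]
        )
        graph_state = reason_step(instruction, graph_state, adjacency)
    return graph_state
```

For a two-hop chain graph (topic entity 0 → entity 1 → entity 2), two reasoning steps move the probability mass from the topic entity to the answer entity at the end of the chain, and the distribution over entities remains normalized after every hop.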