Knowledge Graph Question Answering (KGQA) involves retrieving entities as answers from a Knowledge Graph (KG) using natural language queries. The challenge is to learn to reason over question-relevant KG facts that traverse KG entities and lead to the question answers. To facilitate reasoning, the question is decoded into instructions, which are dense question representations used to guide the KG traversals. However, if the derived instructions do not exactly match the underlying KG information, they may lead to reasoning under irrelevant context. Our method, termed ReaRev, introduces a new way to perform KGQA reasoning with respect to both instruction decoding and execution. To improve instruction decoding, we perform reasoning in an adaptive manner, where KG-aware information is used to iteratively update the initial instructions. To improve instruction execution, we emulate breadth-first search (BFS) with graph neural networks (GNNs). The BFS strategy treats the instructions as a set and allows our method to decide on their execution order on the fly. Experimental results on three KGQA benchmarks demonstrate ReaRev's effectiveness compared with previous state-of-the-art methods, especially when the KG is incomplete or when we tackle complex questions. Our code is publicly available at https://github.com/cmavro/ReaRev_KGQA.
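To make the BFS intuition concrete, the following is a minimal, purely illustrative sketch (not the authors' GNN implementation) of instruction-guided BFS over a toy KG: the instructions are treated as an unordered set, so any instruction may fire at any hop, and the traversal order emerges on the fly rather than being fixed in advance. The `bfs_reasoning` function and the toy triples are hypothetical names introduced here for illustration.

```python
# Conceptual sketch of BFS-style reasoning over a KG (assumption: in
# ReaRev, instructions are dense vectors matched by a GNN; here they
# are simplified to a *set* of relation names to show the set/BFS idea).

def bfs_reasoning(edges, seeds, instructions, hops=2):
    """edges: list of (head, relation, tail) triples;
    seeds: topic entities extracted from the question;
    instructions: unordered set of relations decoded from the question."""
    frontier = set(seeds)
    visited = set(seeds)
    for _ in range(hops):
        nxt = set()
        for h, r, t in edges:
            # Any instruction in the set may execute at this hop:
            # the execution order is decided on the fly, BFS-style.
            if h in frontier and r in instructions:
                nxt.add(t)
        frontier = nxt - visited
        visited |= frontier
    # Candidate answers: reached entities other than the seeds.
    return visited - set(seeds)

# Toy KG for illustration only.
kg = [
    ("Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Obama", "spouse", "Michelle"),
]

# "Where was Obama born?" -> {'Honolulu', 'Hawaii'} reachable in 2 hops.
print(bfs_reasoning(kg, ["Obama"], {"born_in", "located_in"}))
```

In the actual method, the hard relation match above is replaced by learned GNN message passing, and the instructions themselves are iteratively refined with KG-aware information between reasoning rounds.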