Existing KG-augmented models for question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effectively fusing and reasoning over the question context representations and the KG representations, and (ii) automatically selecting relevant nodes from noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which addresses these limitations through joint reasoning of language models (LMs) and GNNs together with a dynamic KG pruning mechanism. Specifically, JointLK performs joint reasoning between the LM and the GNN through a novel dense bidirectional attention module, in which each question token attends to KG nodes and each KG node attends to question tokens, and the representations of the two modalities fuse and update each other through multi-step interactions. The dynamic pruning module then uses the attention weights generated by joint reasoning to recursively prune irrelevant KG nodes. Our results on the CommonsenseQA and OpenBookQA datasets demonstrate that our modality fusion and knowledge pruning methods make better use of relevant knowledge for reasoning.
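The sketch below illustrates the two ideas the abstract describes, bidirectional attention between LM token states and GNN node states followed by attention-weighted node pruning. It is a minimal illustration under assumed shapes, not the authors' implementation: the module name, the unbatched `(T, dim)`/`(N, dim)` tensors, and the `keep_ratio` hyperparameter are all assumptions for exposition.

```python
# Minimal sketch of dense bidirectional LM-KG attention with dynamic pruning.
# Assumptions (not from the paper): unbatched inputs, a single shared dim,
# and a fixed keep_ratio controlling how many nodes survive each step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFusion(nn.Module):
    def __init__(self, dim: int, keep_ratio: float = 0.7):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # projects question token states
        self.n_proj = nn.Linear(dim, dim)   # projects KG node states
        self.keep_ratio = keep_ratio        # fraction of nodes kept per step

    def forward(self, tokens: torch.Tensor, nodes: torch.Tensor):
        # tokens: (T, dim) LM token states; nodes: (N, dim) GNN node states.
        # Dense affinity matrix: every token scores every node.
        scores = self.q_proj(tokens) @ self.n_proj(nodes).T       # (T, N)
        tok2node = F.softmax(scores, dim=-1)    # each token attends to nodes
        node2tok = F.softmax(scores.T, dim=-1)  # each node attends to tokens
        # Mutual update: each modality absorbs context from the other.
        tokens = tokens + tok2node @ nodes                        # (T, dim)
        nodes = nodes + node2tok @ tokens                         # (N, dim)
        # Dynamic pruning: keep the nodes that receive the most attention.
        node_score = tok2node.sum(dim=0)                          # (N,)
        k = max(1, int(self.keep_ratio * nodes.size(0)))
        keep = node_score.topk(k).indices
        return tokens, nodes[keep], keep
```

Multi-step interaction can then be emulated by applying such a module repeatedly, so the node set shrinks recursively while the token and node representations are refined at each step.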