Commonsense question-answering (QA) methods combine the power of pre-trained Language Models (LM) with the reasoning provided by Knowledge Graphs (KG). A typical approach collects nodes relevant to the QA pair from a KG to form a Working Graph (WG), followed by reasoning with Graph Neural Networks (GNNs). This approach faces two major challenges: (i) it is difficult to capture all the information from the QA pair in the WG, and (ii) the WG contains some irrelevant nodes from the KG. To address these, we propose GrapeQA with two simple improvements to the WG: (i) Prominent Entities for Graph Augmentation identifies relevant text chunks from the QA pair and augments the WG with their corresponding latent representations from the LM, and (ii) Context-Aware Node Pruning removes nodes that are less relevant to the QA pair. We evaluate GrapeQA on OpenBookQA, CommonsenseQA and MedQA-USMLE and find that it shows consistent improvements over its LM + KG predecessor (QA-GNN in particular), with large improvements on OpenBookQA.
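To make the two improvements concrete, the sketch below illustrates (in simplified form) how graph augmentation and node pruning could operate on a working graph's node features. It is a minimal illustration, not the paper's exact formulation: the function names, the cosine-similarity relevance score, the `keep_ratio` parameter, and the omission of edge wiring are all assumptions made for clarity.

```python
# Hypothetical sketch of the two GrapeQA-style operations on a working graph.
# Assumptions: relevance is scored with cosine similarity, a fixed keep_ratio
# controls pruning, and edge construction for the new nodes is omitted.
import torch
import torch.nn.functional as F


def augment_with_prominent_entities(node_feats, chunk_feats):
    """Augmentation step: append LM representations of prominent QA text
    chunks as additional nodes of the working graph."""
    return torch.cat([node_feats, chunk_feats], dim=0)


def prune_low_relevance_nodes(node_feats, context_emb, keep_ratio=0.8):
    """Pruning step: score each node against the pooled QA context embedding
    and keep only the top-scoring fraction of nodes."""
    scores = F.cosine_similarity(node_feats, context_emb.unsqueeze(0), dim=-1)
    k = max(1, int(keep_ratio * node_feats.size(0)))
    keep_idx = scores.topk(k).indices
    return node_feats[keep_idx], keep_idx


if __name__ == "__main__":
    d = 128
    wg_nodes = torch.randn(30, d)   # KG nodes retrieved for the QA pair
    qa_chunks = torch.randn(4, d)   # LM embeddings of prominent QA chunks
    qa_context = torch.randn(d)     # pooled LM embedding of the QA pair

    wg_nodes = augment_with_prominent_entities(wg_nodes, qa_chunks)
    wg_nodes, kept = prune_low_relevance_nodes(wg_nodes, qa_context)
    print(wg_nodes.shape, kept.shape)
```

The augmented and pruned node features would then be passed to the GNN reasoning module, as in the QA-GNN-style pipeline described above.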