The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. In this work, we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate our model on QA benchmarks in the commonsense (CommonsenseQA, OpenBookQA) and biomedical (MedQA-USMLE) domains. QA-GNN outperforms existing LM and LM+KG models, and exhibits capabilities to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
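The relevance-scoring step described above can be sketched in miniature. This is a hedged illustration, not the authors' implementation: a real system would use a pre-trained LM (e.g., RoBERTa) to score the concatenation of the QA context and each KG node's text; here a toy token-overlap scorer stands in for the LM so the example stays self-contained, and all function names are hypothetical.

```python
def toy_lm_score(qa_context: str, node_text: str) -> float:
    """Stand-in for an LM relevance head: the fraction of node tokens
    that also appear in the QA context. (A real scorer would use LM
    likelihoods or a classification head over [context; node].)"""
    ctx_tokens = set(qa_context.lower().split())
    node_tokens = node_text.lower().split()
    if not node_tokens:
        return 0.0
    return sum(tok in ctx_tokens for tok in node_tokens) / len(node_tokens)


def score_kg_nodes(qa_context: str, nodes: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate KG nodes by relevance to the QA context and keep
    the top_k, mirroring the idea of pruning a large retrieved subgraph
    before joint reasoning."""
    ranked = sorted(nodes, key=lambda n: toy_lm_score(qa_context, n), reverse=True)
    return ranked[:top_k]


qa = "Where would you find a fox that is not real Answer storybook"
candidate_nodes = ["fox", "storybook", "forest", "quantum physics"]
print(score_kg_nodes(qa, candidate_nodes))  # → ['fox', 'storybook']
```

The key design point the sketch preserves is that scoring conditions each node on the full QA context, so the same node can be kept for one question and pruned for another.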