The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. In this work, we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
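The relevance-scoring idea above can be illustrated with a minimal toy sketch. This is not the paper's implementation: QA-GNN scores each KG node by an LM's likelihood of the node conditioned on the QA context, whereas here a simple token-overlap heuristic stands in for the LM score, and the function names (`relevance_score`, `prune_kg`) and example data are hypothetical.

```python
# Toy sketch of QA-GNN-style relevance scoring (hypothetical names/data).
# In the paper, an LM scores each KG node's importance to the QA context;
# here, bag-of-words overlap stands in for that LM score.

def relevance_score(qa_context: str, node_text: str) -> float:
    """Stand-in for an LM score: fraction of node tokens in the context."""
    ctx_tokens = set(qa_context.lower().split())
    node_tokens = set(node_text.lower().split())
    return len(ctx_tokens & node_tokens) / max(len(node_tokens), 1)

def prune_kg(qa_context: str, nodes: list[str], top_k: int = 2) -> list[str]:
    """Keep only the top_k KG nodes most relevant to the QA context."""
    ranked = sorted(nodes, key=lambda n: relevance_score(qa_context, n),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical QA context and candidate KG nodes retrieved from a large KG.
question = "Where would you buy a round brush to paint"
nodes = ["round brush", "hair salon", "art supply store", "reptile"]
kept = prune_kg(question, nodes, top_k=2)
```

The retained subgraph (`kept`) would then be joined with a QA-context node into a single graph over which a GNN mutually updates the LM and KG representations.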