Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode knowledge graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of the rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets, CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all three datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
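To make the attention mechanism described above concrete, the following is a minimal PyTorch sketch of self-attention with an additive bias on the attention logits, which is the general mechanism underlying a relative position bias such as the Cross-Modal Relative Position Bias. All names, shapes, and the placeholder zero bias are illustrative assumptions, not the paper's implementation; see the repository linked above for the actual code.

```python
import torch
import torch.nn as nn

class RelationAwareSelfAttention(nn.Module):
    """Single-head self-attention with an additive bias on the logits.

    Sketch of the idea in the abstract: language-token and Meta-Path-token
    embeddings are concatenated into one sequence, and a learned bias is
    added to the attention scores so that related entities across the two
    modalities attend to each other. Hypothetical names and shapes.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, tokens: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, dim) -- concatenated LM and Meta-Path tokens
        # bias:   (batch, seq, seq) -- cross-modal bias added to the logits,
        #         e.g. derived from LM-token/KG-entity relevance
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        attn = (q @ k.transpose(-2, -1)) * self.scale + bias
        return attn.softmax(dim=-1) @ v

# Toy usage: 4 language tokens + 3 Meta-Path tokens, hidden size 16.
if __name__ == "__main__":
    x = torch.randn(2, 7, 16)
    bias = torch.zeros(2, 7, 7)  # placeholder; a learned bias in practice
    out = RelationAwareSelfAttention(16)(x, bias)
    print(out.shape)  # torch.Size([2, 7, 16])
```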