Answering open-domain questions requires world knowledge about in-context entities. Because pre-trained Language Models (LMs) cannot store all the required knowledge, external knowledge sources, such as knowledge graphs (KGs), are often used to augment LMs. In this work, we propose the knOwledge REasOning empowered Language Model (OREO-LM), which consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs to interact collaboratively with a differentiable Knowledge Graph Reasoning module. In this way, the LM guides the KG to walk towards the desired answer, while the retrieved knowledge improves the LM. By applying OREO-LM to RoBERTa and T5, we show significant performance gains, achieving state-of-the-art results in the Closed-Book setting. The performance improvement stems mainly from the KG reasoning module's capacity to infer missing relational facts. In addition, OREO-LM provides reasoning paths as rationales to interpret the model's decisions.