Can language models (LMs) ground question-answering (QA) tasks in a knowledge base via their inherent relational reasoning ability? While previous models that use only LMs have seen success on many QA tasks, more recent methods incorporate knowledge graphs (KGs) to complement LMs with more logic-driven implicit knowledge. However, how to effectively extract information from structured data such as KGs to empower LMs remains an open question, and current models rely on graph techniques to extract knowledge. In this paper, we propose to leverage LMs alone to combine language and knowledge for knowledge-based question answering with flexibility, breadth of coverage, and structured reasoning. Specifically, we devise a knowledge construction method that retrieves the relevant context with a dynamic hop, which is more comprehensive than traditional GNN-based techniques. We further devise a deep fusion mechanism to bridge the information-exchange bottleneck between language and knowledge. Extensive experiments show that our model consistently achieves state-of-the-art performance on the CommonsenseQA benchmark, showcasing the possibility of leveraging LMs alone to robustly ground QA in knowledge bases.