Understanding narratives requires reasoning about implicit world knowledge related to the causes, effects, and states of situations described in text. At the core of this challenge is how to access contextually relevant knowledge on demand and reason over it. In this paper, we present initial studies toward zero-shot commonsense question answering by formulating the task as inference over dynamically generated commonsense knowledge graphs. In contrast to previous studies for knowledge integration that rely on retrieval of existing knowledge from static knowledge graphs, our study requires commonsense knowledge integration where contextually relevant knowledge is often not present in existing knowledge bases. Therefore, we present a novel approach that generates contextually-relevant symbolic knowledge structures on demand using generative neural commonsense knowledge models. Empirical results on two datasets demonstrate the efficacy of our neuro-symbolic approach for dynamically constructing knowledge graphs for reasoning. Our approach achieves significant performance boosts over pretrained language models and vanilla knowledge models, all while providing interpretable reasoning paths for its predictions.