Logical reasoning, which is closely related to human cognition, is of vital importance to humans' understanding of texts. Recent years have witnessed increasing attention to machines' logical reasoning abilities. However, previous studies commonly apply ad-hoc methods to model pre-defined relation patterns, such as linking named entities, which considers only the global, commonsense-related knowledge components without local perception of complete facts or events. Such a methodology is insufficient to handle complicated logical structures. Therefore, we argue that the natural logic units should be the backbone constituents of a sentence, such as the subject-verb-object formed "facts", covering both the global and local knowledge pieces that serve as the basis of logical reasoning. Instead of building ad-hoc graphs, we propose a more general and convenient fact-driven approach that constructs a supergraph on top of our newly defined fact units and further enhances the supergraph with explicit guidance from local question and option interactions. Experiments on two challenging logical reasoning benchmark datasets, ReClor and LogiQA, show that our proposed model, \textsc{Focal Reasoner}, substantially outperforms the baseline models. It can also be smoothly applied to other downstream tasks such as MuTual, a dialogue reasoning dataset, achieving competitive results.
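To make the two steps sketched above concrete, the following is a minimal, illustrative Python sketch (not the paper's implementation) of (1) extracting subject-verb-object "fact units" with an off-the-shelf dependency parser and (2) linking facts that share a constituent into a supergraph. The function names `extract_facts` and `build_supergraph`, the use of spaCy, and the shared-lemma linking heuristic are all assumptions for illustration; the actual pipeline of \textsc{Focal Reasoner} may differ in how facts are identified and connected.

```python
# Hypothetical sketch: SVO fact-unit extraction + supergraph over shared constituents.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import itertools
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_facts(text):
    """Return (subject, verb, object) triples found via dependency arcs."""
    facts = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "dative")]
            for s, o in itertools.product(subjects, objects):
                facts.append((s.lemma_, token.lemma_, o.lemma_))
    return facts

def build_supergraph(facts):
    """Connect two fact nodes whenever they share a subject or object lemma
    (a crude stand-in for coreference-based linking)."""
    edges = set()
    for i, j in itertools.combinations(range(len(facts)), 2):
        if {facts[i][0], facts[i][2]} & {facts[j][0], facts[j][2]}:
            edges.add((i, j))
    return edges

facts = extract_facts("Alice founded the lab. The lab published a report.")
print(facts)                    # e.g. [('Alice', 'found', 'lab'), ('lab', 'publish', 'report')]
print(build_supergraph(facts))  # {(0, 1)}: the two facts are linked via the shared constituent "lab"
```

In this sketch, each fact node carries a complete local event (who did what to whom), and edges between facts that share constituents supply the global connectivity, mirroring the abstract's claim that fact units cover both local and global knowledge pieces.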