Textual logical reasoning, especially question-answering (QA) tasks involving logical reasoning, requires awareness of particular logical structures. Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence). However, such structures remain unexplored, as current QA systems focus on entity-based relations. In this work, we propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs). The networks first construct logic graphs by leveraging in-line discourse connectives and generic logic theories, then learn logic representations by evolving the logic relations end-to-end with an edge-reasoning mechanism and updating the graph features. This pipeline is applied on top of a general encoder, whose fundamental features are joined with the high-level logic features for answer prediction. Experiments on three textual logical reasoning datasets demonstrate the soundness of the logical structures built in DAGNs and the effectiveness of the learned logic features. Moreover, zero-shot transfer results show the features' generality to unseen logical texts.
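The two-stage pipeline described above (logic-graph construction from discourse connectives, then iterative feature updates over the graph) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the connective list, the unit-splitting heuristic, and the averaging-style update are all simplifying assumptions, and real DAGNs operate on learned encoder embeddings and re-estimate edge weights with the edge-reasoning mechanism.

```python
# Illustrative sketch of a DAGN-style pipeline (assumed, simplified).
# Stage 1: build a logic graph from in-line discourse connectives.
# Stage 2: update node (unit) features by passing messages along edges.

# A tiny set of discourse connectives; the real system uses a fuller inventory.
CONNECTIVES = {"because", "therefore", "but", "however", "if"}

def build_logic_graph(passage):
    """Split a passage into elementary discourse units (EDUs) at in-line
    connectives and link adjacent units with connective-labeled edges."""
    tokens = passage.lower().replace(",", "").replace(".", "").split()
    units, current, edges = [], [], []
    pending = None  # connective waiting to label the next edge
    for tok in tokens:
        if tok in CONNECTIVES and current:
            units.append(" ".join(current))
            current = []
            pending = tok
        else:
            current.append(tok)
            if pending is not None:
                edges.append((len(units) - 1, len(units), pending))
                pending = None
    if current:
        units.append(" ".join(current))
    return units, edges

def message_pass(node_feats, edges, steps=2):
    """Toy graph-feature update: each step, every node averages its own
    scalar feature with its neighbors' (a stand-in for the learned
    graph update; edge weights are held fixed here)."""
    feats = list(node_feats)
    neighbors = {i: [] for i in range(len(feats))}
    for s, d, _label in edges:
        neighbors[s].append(d)
        neighbors[d].append(s)
    for _ in range(steps):
        feats = [
            (feats[i] + sum(feats[j] for j in neighbors[i]))
            / (1 + len(neighbors[i]))
            for i in range(len(feats))
        ]
    return feats

units, edges = build_logic_graph(
    "It rained, therefore the game was cancelled, but fans stayed."
)
# units: ["it rained", "the game was cancelled", "fans stayed"]
# edges: [(0, 1, "therefore"), (1, 2, "but")]
feats = message_pass([1.0, 0.0, 0.0], edges, steps=1)
```

In the full model, the resulting graph features would be concatenated with the encoder's token-level features before the answer-prediction head.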