Combining deep learning with symbolic logic reasoning aims to capitalize on the success of both fields and is drawing increasing attention. Inspired by DeepLogic, an end-to-end model trained to perform inference on logic programs, we introduce IMA-GloVe-GA, an iterative neural inference network for multi-step reasoning expressed in natural language. In our model, reasoning is performed by an iterative memory neural network based on an RNN with a gated-attention mechanism. We evaluate IMA-GloVe-GA on three datasets: PARARULES, CONCEPTRULES V1 and CONCEPTRULES V2. Experimental results show that DeepLogic augmented with gated attention achieves higher test accuracy than the original DeepLogic and other RNN baseline models. Our model also achieves better out-of-distribution generalisation than RoBERTa-Large when the rules have been shuffled. Furthermore, to address the unbalanced distribution of reasoning depths in current multi-step reasoning datasets, we develop PARARULE-Plus, a large dataset with more examples that require deeper reasoning steps. Experimental results show that adding PARARULE-Plus to the training data improves the model's performance on examples that require deeper reasoning. The source code and data are available at https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language.
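To make the architecture description concrete, below is a minimal PyTorch sketch of the kind of iterative memory network with gated attention outlined above: at each reasoning step, the current state attends over the context-sentence embeddings, a sigmoid gate modulates the attended memory, and an RNN cell updates the state. All names, layer choices, and hyper-parameters here (`IterativeGatedAttentionReasoner`, `dim`, `n_steps`, the GRU cell, the concatenation-based scoring layer) are illustrative assumptions, not the exact IMA-GloVe-GA configuration from the paper.

```python
import torch
import torch.nn as nn

class IterativeGatedAttentionReasoner(nn.Module):
    """Sketch of an iterative memory network with gated attention.

    Layer choices and hyper-parameters are assumptions for illustration,
    not the paper's exact IMA-GloVe-GA configuration.
    """
    def __init__(self, dim=100, n_steps=4):
        super().__init__()
        self.n_steps = n_steps
        self.attn = nn.Linear(2 * dim, 1)    # scores each context sentence against the state
        self.gate = nn.Linear(2 * dim, dim)  # gated attention: modulates the attended memory
        self.cell = nn.GRUCell(dim, dim)     # RNN carrying the reasoning state across steps
        self.out = nn.Linear(dim, 1)         # true/false prediction for the query

    def forward(self, context, query):
        # context: (batch, n_sentences, dim) sentence embeddings (e.g. pooled GloVe)
        # query:   (batch, dim) embedding of the statement to prove
        state = query
        for _ in range(self.n_steps):
            expanded = state.unsqueeze(1).expand_as(context)
            scores = self.attn(torch.cat([context, expanded], dim=-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=-1)             # soft rule selection
            attended = (weights.unsqueeze(-1) * context).sum(1)
            g = torch.sigmoid(self.gate(torch.cat([attended, state], dim=-1)))
            state = self.cell(g * attended, state)              # gated state update
        return torch.sigmoid(self.out(state)).squeeze(-1)

# Toy usage with random embeddings standing in for GloVe sentence vectors:
model = IterativeGatedAttentionReasoner(dim=100, n_steps=4)
context = torch.randn(2, 6, 100)  # 6 context sentences per example
query = torch.randn(2, 100)
prob = model(context, query)      # (2,) probability each query follows from its context
```

Unrolling the update for a fixed number of steps mirrors the iterative, multi-step character of the inference; in the actual model the inputs would be GloVe-based sentence representations rather than random vectors.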