Natural language inference (NLI) aims to determine the logical relationship between two sentences, namely Entailment, Contradiction, or Neutral. In recent years, deep learning models have become the prevailing approach to NLI, but they lack interpretability and explainability. In this work, we address the explainability of NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach. Our model first detects phrases as semantic units and aligns corresponding phrases in the two sentences. Then, the model predicts the NLI label for each pair of aligned phrases, and induces the sentence label by fuzzy logic formulas. Our EPR is differentiable almost everywhere, so the system can be trained end-to-end. In this way, we are able to provide explicit explanations of phrasal logical relationships in a weakly supervised manner. We further show that such reasoning results help textual explanation generation.
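To make the label-induction step concrete, the following is a minimal sketch of how fuzzy logic can aggregate per-phrase label probabilities into a sentence-level label. The specific formulas here (product t-norm for conjunction, De Morgan for disjunction) are illustrative assumptions, not necessarily the exact formulation used by EPR.

```python
# Illustrative sketch (assumed formulas, not necessarily EPR's exact ones):
# induce a sentence-level NLI label from per-phrase-pair probabilities
# using product fuzzy logic.

def induce_sentence_label(phrase_probs):
    """phrase_probs: list of dicts with keys 'E', 'C', 'N' summing to 1."""
    # Entailment: every aligned phrase pair entails (conjunction = product).
    p_e = 1.0
    for p in phrase_probs:
        p_e *= p['E']
    # Contradiction: at least one phrase pair contradicts
    # (disjunction via De Morgan: 1 - prod(1 - c_i)).
    p_not_c = 1.0
    for p in phrase_probs:
        p_not_c *= (1.0 - p['C'])
    p_c = 1.0 - p_not_c
    # Neutral: remaining probability mass, clipped at zero.
    p_n = max(0.0, 1.0 - p_e - p_c)
    # Renormalize so the three scores form a distribution.
    total = p_e + p_c + p_n
    return {'E': p_e / total, 'C': p_c / total, 'N': p_n / total}

# Two phrase pairs that both lean toward entailment yield an
# Entailment-dominated sentence label.
probs = induce_sentence_label([
    {'E': 0.9, 'C': 0.05, 'N': 0.05},
    {'E': 0.8, 'C': 0.10, 'N': 0.10},
])
print(max(probs, key=probs.get))
```

Because every operation above is a product or sum of probabilities, the aggregation is differentiable in the phrase-level scores, which is what allows end-to-end training from sentence-level supervision alone.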