Most contemporary approaches to multi-hop Natural Language Inference (NLI) construct explanations by considering each test case in isolation. However, this paradigm is known to suffer from semantic drift, a phenomenon in which spurious explanations are constructed, leading to wrong conclusions. In contrast, this paper proposes an abductive framework for multi-hop NLI that explores the retrieve-reuse-refine paradigm of Case-Based Reasoning (CBR). Specifically, we present Case-Based Abductive Natural Language Inference (CB-ANLI), a model that addresses unseen inference problems through the analogical transfer of prior explanations from similar examples. We empirically evaluate the abductive framework on commonsense and scientific question answering tasks, demonstrating that CB-ANLI can be effectively integrated with sparse and dense pre-trained encoders to improve multi-hop inference, or adopted as an evidence retriever for Transformers. Moreover, an empirical analysis of semantic drift reveals that the CBR paradigm boosts the quality of the most challenging explanations, a feature that has a direct impact on robustness and accuracy in downstream inference tasks.
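To make the retrieve-reuse-refine loop concrete, the following is a minimal Python sketch of the general CBR paradigm the abstract describes. It is an illustration under stated assumptions, not the CB-ANLI implementation: the case representation, the toy lexical similarity measure, and the function names (`retrieve`, `reuse`, `refine`, `explain`) are all hypothetical, and a real system would use sparse or dense pre-trained encoders for similarity, as the abstract notes.

```python
from dataclasses import dataclass


@dataclass
class Case:
    problem: str            # a previously solved inference problem
    explanation: list[str]  # the chain of facts that explains it


def similarity(a: str, b: str) -> float:
    """Toy Jaccard overlap on tokens; stands in for a sparse/dense encoder."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))


def retrieve(case_base: list[Case], problem: str, k: int = 3) -> list[Case]:
    """RETRIEVE: find the k solved cases most similar to the new problem."""
    ranked = sorted(case_base, key=lambda c: similarity(c.problem, problem), reverse=True)
    return ranked[:k]


def reuse(cases: list[Case]) -> list[str]:
    """REUSE: transfer the prior explanations of the retrieved cases."""
    facts: list[str] = []
    for case in cases:
        for fact in case.explanation:
            if fact not in facts:
                facts.append(fact)
    return facts


def refine(facts: list[str], problem: str, max_facts: int = 5) -> list[str]:
    """REFINE: keep only transferred facts still relevant to the new problem,
    bounding explanation length to curb semantic drift."""
    ranked = sorted(facts, key=lambda f: similarity(f, problem), reverse=True)
    return [f for f in ranked[:max_facts] if similarity(f, problem) > 0]


def explain(case_base: list[Case], problem: str) -> list[str]:
    """Build an explanation for an unseen problem by analogical transfer."""
    return refine(reuse(retrieve(case_base, problem)), problem)
```

The key design point the sketch captures is that the explanation for an unseen problem is never built from scratch: it is assembled from explanations of similar solved cases and then filtered against the new problem, which is the mechanism the abstract credits with reducing semantic drift.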