Abductive reasoning starts from a set of observations and aims to find the most plausible explanation for them. To perform abduction, humans often rely on temporal and causal inference, and on knowledge about how a hypothetical situation can lead to different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI task -- which consists of choosing the more plausible explanation for given observations. We train a specialized language model, LMI, to generate what could happen next in a hypothetical scenario that evolves from a given event. We then propose a multi-task model, MTL, to solve the Abductive NLI task; it predicts a plausible explanation by a) considering the different possible events emerging from each candidate hypothesis -- events generated by LMI -- and b) selecting the hypothesis whose events are most similar to the observed outcome. We show that our MTL model improves over vanilla pre-trained LMs fine-tuned on Abductive NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.
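To make the selection mechanism described above concrete, the following is a minimal, hypothetical sketch of the core idea: generate "what happens next" continuations for each candidate hypothesis with a generative LM, then choose the hypothesis whose continuations are most similar to the observed outcome. The model names (gpt2, all-MiniLM-L6-v2), the prompt format, and the cosine-similarity scoring are illustrative assumptions standing in for LMI and the learned MTL scorer; this is not the authors' implementation.

```python
# Sketch: score each candidate hypothesis by how well its generated
# next events match the observed outcome o2. All choices here are
# assumptions for illustration, not the paper's MTL model.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

generator = pipeline("text-generation", model="gpt2")  # stand-in for LMI
encoder = SentenceTransformer("all-MiniLM-L6-v2")      # sentence embeddings

def score_hypothesis(o1: str, hypothesis: str, o2: str, n: int = 3) -> float:
    """Generate n possible next events for (o1, hypothesis) and return the
    highest cosine similarity between a generated event and the outcome o2."""
    prompt = f"{o1} {hypothesis} As a result,"
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=n,
                        do_sample=True, pad_token_id=50256)
    events = [out["generated_text"][len(prompt):].strip() for out in outputs]
    event_emb = encoder.encode(events, convert_to_tensor=True)
    o2_emb = encoder.encode(o2, convert_to_tensor=True)
    return util.cos_sim(o2_emb, event_emb).max().item()

def choose_explanation(o1: str, hypotheses: list[str], o2: str) -> str:
    """Return the candidate hypothesis whose generated next events best match o2."""
    return max(hypotheses, key=lambda h: score_hypothesis(o1, h, o2))
```

In the paper, the comparison to the observed outcome is learned jointly in the multi-task model rather than computed with an off-the-shelf similarity heuristic as above; the sketch only conveys the flow from candidate hypotheses, through generated next events, to a selection.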