The abductive natural language inference task ($\alpha$NLI) aims to infer the most plausible explanation connecting a cause to an observed event. In the $\alpha$NLI task, two observations are given, and the model must pick out the most plausible hypothesis from a set of candidates. Existing methods model each candidate hypothesis separately and penalize the inference network uniformly. In this paper, we argue that it is unnecessary to distinguish the reasoning abilities among correct hypotheses; similarly, all wrong hypotheses contribute equally when explaining the reasons for the observations. Therefore, we propose to group rather than rank the hypotheses, and we design a structural loss called the ``joint softmax focal loss''. Based on the observation that the hypotheses are generally semantically related, we design a novel interactive language model that exploits the rich interaction among competing hypotheses. We name this new model for $\alpha$NLI the Interactive Model with Structural Loss (IMSL). Experimental results show that IMSL achieves the highest performance with the RoBERTa-large pretrained model, improving ACC and AUC by about 1\% and 5\%, respectively.
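To make the grouping idea concrete, the following is a minimal PyTorch sketch of one plausible reading of a "joint softmax focal loss": the softmax probabilities of all correct hypotheses are pooled into a single group mass, so correct hypotheses are not ranked against one another, and a focal term down-weights easy examples. The function name and the `gamma` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_softmax_focal_loss(scores: torch.Tensor,
                             labels: torch.Tensor,
                             gamma: float = 2.0) -> torch.Tensor:
    """Sketch of a joint softmax focal loss over competing hypotheses.

    scores: (num_candidates,) plausibility scores for all hypotheses of one
            observation pair.
    labels: (num_candidates,) with 1.0 for correct hypotheses, 0.0 for wrong.

    The target probability is the SUM of softmax mass over the correct group,
    so no distinction is drawn among correct hypotheses; the focal factor
    (1 - p)^gamma reduces the penalty once the correct group dominates.
    """
    probs = F.softmax(scores, dim=-1)
    # Joint probability mass assigned to the whole group of correct hypotheses.
    p_correct = (probs * labels).sum(dim=-1)
    # Focal-weighted negative log-likelihood of the correct group.
    return -((1.0 - p_correct) ** gamma) * torch.log(p_correct.clamp_min(1e-12))
```

Under this sketch, raising the scores of any correct hypothesis lowers the loss, while the relative ordering of correct hypotheses among themselves has no effect, matching the "group rather than rank" intuition.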