Natural language contexts exhibit logical regularities under substitutions of related concepts: these are captured by a functional order-theoretic property called monotonicity. For a class of NLI problems in which the entailment label depends only on the monotonicity of the context and the relation between the substituted concepts, we build on previous techniques aimed at improving NLI model performance, since consistent performance across both upward- and downward-monotone contexts remains difficult to attain even for state-of-the-art models. To this end, we reframe the problem of context monotonicity classification so that it is compatible with transformer-based pre-trained NLI models, and add this task to the training pipeline. Furthermore, we introduce a sound and complete simplified monotonicity logic formalism that describes our treatment of contexts as abstract units. Using the notions of this formalism, we adapt targeted challenge sets to investigate whether an intermediate context monotonicity classification task aids NLI models' performance on examples that require monotonicity reasoning.