Lexical inference in context (LIiC) is the task of recognizing textual entailment between two very similar sentences, i.e., sentences that only differ in one expression. It can therefore be seen as a variant of the natural language inference task that is focused on lexical semantics. We formulate and evaluate the first approaches based on pretrained language models (LMs) for this task: (i) a few-shot NLI classifier, (ii) a relation induction approach based on handcrafted patterns expressing the semantics of lexical inference, and (iii) a variant of (ii) with patterns that were automatically extracted from a corpus. All our approaches outperform the previous state of the art, showing the potential of pretrained LMs for LIiC. In an extensive analysis, we investigate factors of success and failure of our three approaches.
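The handcrafted-pattern idea in approach (ii) can be sketched as follows. Given a sentence pair differing in a single expression, the two differing words are slotted into patterns that verbalize lexical inference, and a pretrained LM then scores the plausibility of each instantiation. This is a minimal illustrative sketch, not the paper's implementation: the pattern strings are hypothetical examples, and the LM scorer is abstracted as any callable from text to a plausibility score.

```python
# Sketch of pattern-based lexical-inference scoring (illustrative, not the
# paper's actual patterns or scoring procedure).

# Hypothetical handcrafted patterns expressing lexical inference,
# e.g. hypernymy-style paraphrases.
PATTERNS = [
    "{w1}, which is a kind of {w2}",
    "{w1}, that is to say {w2}",
]

def fill_patterns(premise_word: str, hypothesis_word: str) -> list:
    """Instantiate each handcrafted pattern with the word pair that
    distinguishes the premise sentence from the hypothesis sentence."""
    return [p.format(w1=premise_word, w2=hypothesis_word) for p in PATTERNS]

def score_pair(premise_word: str, hypothesis_word: str, lm_score) -> float:
    """Aggregate plausibility over all pattern instantiations.

    `lm_score` stands in for a pretrained-LM scoring function (e.g. a
    pseudo-log-likelihood); here any callable str -> float works.
    Taking the max treats the best-fitting pattern as decisive.
    """
    return max(lm_score(text) for text in fill_patterns(premise_word, hypothesis_word))

# Toy stand-in scorer for demonstration: shorter strings score higher.
toy_lm = lambda s: -len(s)
print(fill_patterns("poodle", "dog")[0])   # -> poodle, which is a kind of dog
print(score_pair("poodle", "dog", toy_lm))
```

In the real setting, `lm_score` would be replaced by a pretrained LM's plausibility estimate for the filled pattern, and entailment would be predicted when the score clears a learned threshold.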