Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires a model to infer the relationship between a sentence pair (premise and hypothesis). Many recent works have applied contrastive learning, incorporating the pair relationships from NLI datasets to learn sentence representations. However, these methods focus only on comparisons between sentence-level representations. In this paper, we propose a Pair-level Supervised Contrastive Learning approach (PairSCL). We adopt a cross-attention module to learn joint representations of sentence pairs. A contrastive learning objective is designed to distinguish the classes of sentence pairs by pulling pairs of the same class together and pushing apart pairs of different classes. We evaluate PairSCL on two public NLI datasets, where it outperforms other methods in accuracy by 2.1% on average. Furthermore, our method outperforms the previous state-of-the-art method on seven transfer tasks of text classification.
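The abstract does not give the exact loss formulation. A minimal PyTorch sketch of a pair-level supervised contrastive objective, written in the spirit of standard SupCon over joint pair representations, might look like the following; the function name `pair_supcon_loss`, the temperature value, and the masking scheme are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: names, temperature, and masking are assumptions
# in the spirit of supervised contrastive learning (SupCon), not the
# authors' exact formulation.
def pair_supcon_loss(pair_reprs: torch.Tensor,
                     labels: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over joint sentence-pair representations.

    pair_reprs: (batch, dim) joint representation of each (premise,
        hypothesis) pair, e.g. the output of a cross-attention module.
    labels: (batch,) NLI class id of each pair
        (entailment / neutral / contradiction).
    """
    z = F.normalize(pair_reprs, dim=1)              # compare in cosine space
    sim = z @ z.t() / temperature                   # (batch, batch) logits
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, -1e9)                # exclude self-similarity
    # Positives for an anchor: other in-batch pairs with the same NLI label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                          # anchors with >=1 positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()                # pull same class, push others
```

In practice, a loss of this form would typically be combined with a standard cross-entropy classification loss on the same joint pair representations, so that the model both separates the NLI classes in representation space and predicts them directly.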