This paper presents SimCSE, a simple contrastive learning framework that greatly advances the state of the art in sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT-base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.
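To make the unsupervised objective concrete, the sketch below illustrates the idea of encoding the same sentence twice with dropout active, so the two passes serve as a positive pair while other in-batch sentences act as negatives. This is a minimal illustration, not the paper's released implementation; it assumes a Hugging Face `transformers` BERT encoder, and the function name `simcse_unsup_loss` and the use of the `[CLS]` vector are illustrative choices.

```python
# Minimal sketch of an unsupervised SimCSE-style contrastive loss (assumed setup,
# not the authors' official code).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout on: two passes give two "views" of each sentence

def simcse_unsup_loss(sentences, temperature=0.05):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    # Encode the same batch twice; different dropout masks act as minimal augmentation.
    z1 = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, pass 1
    z2 = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, pass 2
    # Cosine-similarity matrix between every sentence in pass 1 and pass 2.
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    # Each sentence's positive is its own second pass; other rows are in-batch negatives.
    labels = torch.arange(sim.size(0))
    return F.cross_entropy(sim, labels)

loss = simcse_unsup_loss(["A man is playing guitar.", "The weather is nice today."])
loss.backward()
```

The supervised variant described above follows the same pattern, except that the second view comes from an annotated "entailment" hypothesis and each "contradiction" hypothesis is appended as an additional hard negative column in the similarity matrix.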