This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We hypothesize that dropout acts as minimal data augmentation, and that removing it leads to a representation collapse. Then, we incorporate annotated pairs from natural language inference datasets into our contrastive learning framework, using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT-base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement over previous best results. We also show, both theoretically and empirically, that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and that it better aligns positive pairs when supervised signals are available.
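The unsupervised objective described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation (which operates on BERT's internal dropout during encoding); here dropout is applied directly to toy sentence embeddings, so that two independent dropout passes over the same sentence form a positive pair and the other sentences in the batch serve as in-batch negatives in an InfoNCE-style loss. The function names, dimensions, and hyperparameter values are illustrative assumptions.

```python
import math
import random

def cosine(u, v):
    # Cosine similarity with a small epsilon to guard against zero norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def dropout(vec, p, rng):
    # Standard (inverted) dropout: zero each unit with probability p,
    # rescale survivors by 1/(1-p) so the expected value is unchanged.
    return [0.0 if rng.random() < p else x / (1 - p) for x in vec]

def unsup_simcse_loss(embeddings, p=0.1, tau=0.05, seed=0):
    """Toy unsupervised SimCSE loss over a batch of sentence embeddings.

    Each embedding is passed through dropout twice to obtain two views
    (the positive pair); the remaining sentences in the batch act as
    negatives. The loss for sentence i is the InfoNCE term
    -log( exp(sim(h_i, h_i^+)/tau) / sum_j exp(sim(h_i, h_j^+)/tau) ).
    """
    rng = random.Random(seed)
    views1 = [dropout(e, p, rng) for e in embeddings]  # first dropout pass
    views2 = [dropout(e, p, rng) for e in embeddings]  # independent second pass
    n = len(embeddings)
    total = 0.0
    for i in range(n):
        sims = [math.exp(cosine(views1[i], views2[j]) / tau) for j in range(n)]
        total += -math.log(sims[i] / sum(sims))
    return total / n
```

With dropout removed (p = 0), the two views of each sentence become identical, which the abstract's hypothesis links to representation collapse; the small noise from dropout is what makes the positive pair a non-trivial prediction target.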