This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We hypothesize that dropout acts as minimal data augmentation and that removing it leads to representation collapse. We then draw inspiration from the recent success of learning sentence embeddings from natural language inference (NLI) datasets and incorporate annotated pairs from NLI datasets into contrastive learning, using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT-base achieve an average of 74.5% and 81.6% Spearman's correlation respectively, a 7.9- and 4.6-point improvement over the previous best results. We also show that contrastive learning theoretically regularizes pre-trained embeddings' anisotropic space to be more uniform, and that it better aligns positive pairs when supervised signals are available.
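To make the unsupervised objective concrete, the following is a minimal sketch (not the paper's reference implementation): the same batch is encoded twice with dropout active, so each sentence's positive is itself under a different dropout mask, and all other in-batch sentences act as negatives. The `encoder` callable and `temperature` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, input_ids, attention_mask, temperature=0.05):
    """Contrastive loss where each sentence's positive is itself re-encoded
    under a different dropout mask (dropout is the only 'augmentation')."""
    # Two forward passes of the same batch: in train mode dropout is active,
    # so z1 and z2 differ only by dropout noise.
    z1 = encoder(input_ids, attention_mask)  # (batch, hidden)
    z2 = encoder(input_ids, attention_mask)  # (batch, hidden)

    # Cosine similarity between every z1[i] and every z2[j], scaled by temperature.
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature

    # The matching index (same sentence, different dropout mask) is the positive;
    # every other sentence in the batch is a negative.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```

In the supervised variant described above, z2 would instead come from encoding the entailment hypotheses, with embeddings of the contradiction hypotheses appended as additional hard-negative columns of the similarity matrix.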