Learning sentence embeddings in an unsupervised manner is fundamental in natural language processing. Recent common practice is to couple pre-trained language models with unsupervised contrastive learning, whose success relies on augmenting a sentence with a semantically close positive instance to construct a contrastive pair. Nonetheless, existing approaches usually depend on a mono-augmenting strategy, which induces learning shortcuts towards the augmenting bias and thus corrupts the quality of the sentence embeddings. A straightforward remedy is to resort to more diverse positives from a multi-augmenting strategy, yet it remains an open question how to learn, without supervision, from diverse positives of uneven augmenting quality in the text domain. As one answer, we propose a novel Peer-Contrastive Learning (PCL) with diverse augmentations. PCL constructs diverse contrastive positives and negatives at the group level for unsupervised sentence embeddings, and performs peer-positive contrast as well as peer-network cooperation, which offers an inherent anti-bias ability and an effective way to learn from diverse augmentations. Experiments on STS benchmarks verify the effectiveness of PCL against its competitors for unsupervised sentence embeddings.
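To make the group-level idea concrete, the following is a minimal sketch of a contrastive objective in which each anchor sentence is contrasted against a group of positives produced by several diverse augmentations, with other anchors' views serving as in-batch negatives. The function name `group_contrastive_loss`, the temperature value, and the exact averaging scheme are assumptions for exposition only and are not the specific loss defined by PCL.

```python
# Illustrative sketch only: a group-level contrastive loss with multiple
# augmented positives per anchor. Names and hyperparameters are assumed,
# not taken from the PCL paper.
import torch
import torch.nn.functional as F


def group_contrastive_loss(anchors: torch.Tensor,
                           positives: torch.Tensor,
                           temperature: float = 0.05) -> torch.Tensor:
    """Contrast each anchor against a group of diverse positives.

    anchors:   (batch, dim)    embeddings of the original sentences
    positives: (batch, k, dim) embeddings from k diverse augmentations
    Augmented views of other anchors in the batch act as negatives.
    """
    batch, k, dim = positives.shape
    a = F.normalize(anchors, dim=-1)                        # (batch, dim)
    p = F.normalize(positives, dim=-1).reshape(batch * k, dim)

    # Cosine-similarity logits between every anchor and every augmented view.
    logits = a @ p.t() / temperature                        # (batch, batch*k)

    # For anchor i, its own k augmentations (rows i*k .. i*k+k-1) are the
    # positives; average the cross-entropy over them so each augmentation
    # contributes equally to the group-level objective.
    loss = logits.new_zeros(())
    for j in range(k):
        targets = torch.arange(batch, device=logits.device) * k + j
        loss = loss + F.cross_entropy(logits, targets)
    return loss / k


if __name__ == "__main__":
    # Random tensors stand in for encoder outputs (e.g. BERT [CLS] vectors).
    anchors = torch.randn(8, 768)
    positives = torch.randn(8, 4, 768)   # 4 diverse augmentations per sentence
    print(group_contrastive_loss(anchors, positives).item())
```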