To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either be augmented with additional pretraining objectives or finetuned on a large set of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how PLMs can be leveraged to obtain high-quality sentence embeddings without the need for labeled data, finetuning or modifications to the pretraining objective: We utilize the generative abilities of large and high-performing PLMs to generate entire datasets of labeled text pairs from scratch, which we then use for finetuning much smaller and more efficient models. Our fully unsupervised approach outperforms strong baselines on several semantic textual similarity datasets.
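As a rough illustration of the dataset-generation step described above (not the paper's actual implementation — the prompt templates and the stubbed `generate` callable below are hypothetical), one could prompt a generative PLM to produce a second sentence at a target similarity level, yielding (sentence, sentence, label) triples for finetuning a smaller model:

```python
# Sketch: generating labeled sentence pairs with a generative PLM.
# The templates and the stubbed generator are illustrative assumptions,
# not the exact prompts or labels used in the paper.

PROMPT_TEMPLATES = {
    1.0: 'Write two sentences that mean the same thing.\n'
         'Sentence 1: "{s1}"\nSentence 2: "',
    0.5: 'Write two sentences that are somewhat similar.\n'
         'Sentence 1: "{s1}"\nSentence 2: "',
    0.0: 'Write two sentences on completely different topics.\n'
         'Sentence 1: "{s1}"\nSentence 2: "',
}


def build_prompt(sentence: str, label: float) -> str:
    """Fill the template for the desired similarity label."""
    return PROMPT_TEMPLATES[label].format(s1=sentence)


def make_training_pair(sentence: str, label: float, generate):
    """Return one labeled text pair (s1, s2, label).

    `generate` maps a prompt string to a continuation; in practice it
    would wrap a large PLM's sampling loop. The continuation is cut at
    the first closing quote, mirroring the prompt's quoting scheme.
    """
    continuation = generate(build_prompt(sentence, label))
    second = continuation.split('"')[0].strip()
    return (sentence, second, label)


if __name__ == "__main__":
    # Stub standing in for a real PLM, so the sketch runs end to end.
    fake_plm = lambda prompt: 'A cat sat on the mat." Extra text'
    print(make_training_pair("The cat is on the mat.", 1.0, fake_plm))
```

Repeating this over many seed sentences and label values would yield a synthetic dataset of labeled pairs, which the paper then uses to finetune a much smaller sentence-embedding model.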