This paper describes Difference-aware Deep continuous prompt for Contrastive Sentence Embeddings (D2CSE), a method for learning sentence embeddings. Compared to state-of-the-art approaches, D2CSE computes sentence vectors that excel at distinguishing subtle differences between similar sentences, using a simple neural architecture for continuous prompts. Unlike existing architectures that require multiple pretrained language models (PLMs) to process a pair of original and corrupted (subtly modified) sentences, D2CSE avoids the cumbersome fine-tuning of multiple PLMs by optimizing only the continuous prompts while performing multiple tasks (i.e., contrastive learning and conditional replaced token detection), all in a self-guided manner. D2CSE overloads these tasks onto a single PLM via continuous prompts and, as a result, greatly reduces memory consumption. The number of trainable parameters in D2CSE is reduced to about 1\% of that of existing approaches, while the quality of sentence embeddings improves substantially. We evaluate D2CSE on seven Semantic Textual Similarity (STS) benchmarks using three different metrics: Spearman's rank correlation, recall@K for a retrieval task, and the anisotropy of the embedding space measured by alignment and uniformity. Our empirical results suggest that shallow (not too meticulously devised) continuous prompts can be tuned effectively for multiple NLP tasks and yield improvements over existing state-of-the-art approaches.
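To make the prompt-only optimization concrete, the following is a minimal prompt-tuning sketch in PyTorch, not the paper's implementation: trainable continuous prompt vectors are prepended to the token embeddings of a frozen PLM, so that only the prompt parameters receive gradients. The class name, the `bert-base-uncased` backbone, and the prompt length of 16 are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ContinuousPromptEncoder(nn.Module):
    """Hypothetical sketch: a frozen PLM steered by trainable continuous prompts."""

    def __init__(self, model_name="bert-base-uncased", prompt_len=16):
        super().__init__()
        self.plm = AutoModel.from_pretrained(model_name)
        for p in self.plm.parameters():  # freeze the PLM; only prompts are trained
            p.requires_grad = False
        hidden = self.plm.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.plm.get_input_embeddings()(input_ids)       # (B, T, H)
        prompt = self.prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
        inputs = torch.cat([prompt, tok_emb], dim=1)               # (B, P+T, H)
        prompt_mask = attention_mask.new_ones(tok_emb.size(0), self.prompt.size(0))
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.plm(inputs_embeds=inputs, attention_mask=mask)
        return out.last_hidden_state[:, self.prompt.size(0)]       # hidden state at [CLS]
```

In this minimal version only the 16 × 768 ≈ 12K prompt parameters train; deep continuous prompts inserted at every layer, as the method's name suggests, would add more, but the trainable footprint remains a small fraction of full fine-tuning.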
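For the evaluation criteria, the sketch below illustrates the alignment and uniformity measures of Wang and Isola (2020) used to assess anisotropy, together with an InfoNCE-style contrastive loss of the kind optimized in contrastive sentence embedding; the function names and the temperature of 0.05 (the common SimCSE setting) are assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def alignment(x, y, alpha=2):
    # x, y: L2-normalized embeddings of positive pairs, shape (N, d).
    # Expected distance between positive pairs; lower is better.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    # x: L2-normalized embeddings, shape (N, d).
    # Log of the average pairwise Gaussian potential; lower is better.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

def infonce_loss(z1, z2, temperature=0.05):
    # z1, z2: embeddings of two views of the same batch, shape (N, d).
    # Each sentence's second view is its positive; all other in-batch
    # second views act as negatives (SimCSE-style contrastive objective).
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)
```

Lower alignment and uniformity values together indicate a less anisotropic embedding space, which is the sense in which the abstract reports anisotropy.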