Text embedding models are designed for sentence-level applications such as retrieval and semantic similarity, and are evaluated primarily on sentence-level benchmarks; their behavior on isolated words is less well understood. We show that simply prepending a semantic prompt to a word before embedding it substantially improves word-similarity correlations. Testing 7 text embedding models, including text-embedding-3-large (OpenAI), embed-english-v3.0 (Cohere), voyage-3 (Voyage AI), all-mpnet-base-v2, and Qwen3-Embedding-8B, on 3 standard benchmarks (SimLex-999, WordSim-353, MEN-3000), we find that prompts such as "meaning: {word}" or "Represent the semantic concept: {word}" improve Spearman correlations by up to +0.29 on SimLex-999. Some models fail completely on bare words (correlation = 0) but recover with prompts (a +0.73 improvement). Our best results reach correlation = 0.692 on SimLex-999 with embed-english-v3.0 (Cohere), and correlation = 0.811 on WordSim-353 and correlation = 0.855 on MEN-3000 with text-embedding-3-large (OpenAI). On SimLex-999, these results outperform classic static embeddings such as Word2Vec (correlation = 0.40) and even the best static method, LexVec (correlation = 0.48), establishing a new state of the art among pure embedding methods. This zero-shot technique requires no training and works with any text embedding model.
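The evaluation pipeline described above can be sketched as follows. This is a minimal, self-contained illustration: `embed` is a toy character-bigram placeholder standing in for a real embedding model (in practice one would call, e.g., all-mpnet-base-v2 or an embedding API), and the `"meaning: {word}"` template is one of the prompts named in the abstract. The pipeline itself — wrap each word in the prompt, embed, take cosine similarity per pair, then correlate with human ratings via Spearman rank correlation — matches the method being evaluated.

```python
import math

def embed(text):
    # Toy placeholder: character-bigram counts. A real run would replace
    # this with calls to an actual text embedding model.
    vec = {}
    for i in range(len(text) - 1):
        bg = text[i:i + 2]
        vec[bg] = vec.get(bg, 0) + 1
    return vec

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(c * v.get(k, 0) for k, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def spearman(xs, ys):
    # Spearman rank correlation with average ranks for ties.
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # ranks are 1-based
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den if den else 0.0

# One of the prompt templates from the abstract.
PROMPT = "meaning: {word}"

def evaluate(pairs, human_ratings, template="{word}"):
    # Embed each word (optionally prompt-wrapped), score pairs by cosine
    # similarity, and correlate model scores with human ratings.
    sims = [cosine(embed(template.format(word=a)),
                   embed(template.format(word=b)))
            for a, b in pairs]
    return spearman(sims, human_ratings)
```

On a real benchmark, `pairs` would be the word pairs of SimLex-999, WordSim-353, or MEN-3000 and `human_ratings` their gold similarity scores; the comparison in the abstract is then `evaluate(pairs, ratings, PROMPT)` versus the bare-word baseline `evaluate(pairs, ratings)`.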