Text embedding models play a cornerstone role in AI applications such as retrieval-augmented generation (RAG). While general-purpose text embedding models demonstrate strong performance on generic retrieval benchmarks, their effectiveness diminishes when applied to private datasets (e.g., company-specific proprietary data), which often contain specialized terminology and jargon. In this work, we introduce BMEmbed, a novel method for adapting general-purpose text embedding models to private datasets. By leveraging the well-established keyword-based retrieval technique BM25, we construct supervisory signals from the rankings of keyword-based retrieval results to facilitate model adaptation. We evaluate BMEmbed across a range of domains, datasets, and models, showing consistent improvements in retrieval performance. Moreover, we provide empirical insights into how BM25-based signals improve embeddings by fostering alignment and uniformity, highlighting the value of this approach in adapting models to domain-specific data. We release the source code at https://github.com/BaileyWei/BMEmbed for the research community.
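To make the core idea concrete, the following is a minimal, self-contained sketch (not the paper's exact training recipe) of how BM25 rankings can be turned into supervisory triples: documents ranked highly by BM25 for a query serve as positives, and low-ranked documents serve as negatives for contrastive fine-tuning of an embedding model. The BM25 implementation, tokenization, and triple-selection heuristic here are illustrative assumptions.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency of each term across the corpus.
    df = Counter()
    for doc in corpus_tokens:
        for term in set(doc):
            df[term] += 1
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        s = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(s)
    return scores

def ranking_triples(query, corpus, n_pos=1, n_neg=2):
    """Turn a BM25 ranking into (query, positive, negative) training triples.

    Top-ranked documents become positives; bottom-ranked ones become
    negatives. A contrastive loss over such triples adapts the embedding
    model to the domain's keyword statistics.
    """
    toks = [d.lower().split() for d in corpus]  # naive whitespace tokenizer
    scores = bm25_scores(query.lower().split(), toks)
    order = sorted(range(len(corpus)), key=lambda i: -scores[i])
    positives, negatives = order[:n_pos], order[-n_neg:]
    return [(query, corpus[p], corpus[n]) for p in positives for n in negatives]
```

For example, `ranking_triples("keyword retrieval", docs)` pairs the query with its best BM25 match as the positive and the lowest-scoring documents as negatives; the resulting triples can feed a standard triplet or InfoNCE loss.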