GPT transformers are the largest language models available, yet semantic search is dominated by BERT transformers. We present SGPT-BE and SGPT-CE for applying GPT models as Bi-Encoders or Cross-Encoders to symmetric or asymmetric search. SGPT-BE produces semantically meaningful sentence embeddings by contrastive fine-tuning of only bias tensors and a novel pooling method. A 5.8 billion parameter SGPT-BE outperforms the best available sentence embeddings by 6%, setting a new state-of-the-art on BEIR. It outperforms the concurrently proposed OpenAI Embeddings of the 175B Davinci endpoint, which fine-tunes 250,000 times more parameters. SGPT-CE uses log probabilities from GPT models without any fine-tuning. A 6.1 billion parameter SGPT-CE sets an unsupervised state-of-the-art on BEIR. It beats the supervised state-of-the-art on 7 datasets, but falls significantly short on others; we show how this can be alleviated by adapting the prompt. SGPT-BE and SGPT-CE performance scales with model size, yet the increased latency, storage, and compute costs should be considered. Code, models, and result files are freely available at https://github.com/Muennighoff/sgpt.
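To make the bi-encoder idea concrete, the following is a minimal sketch, not the authors' released code, of extracting sentence embeddings from a GPT model with position-weighted mean pooling over its hidden states. The model name is a small placeholder, the `embed` helper is hypothetical, and the additional contrastive fine-tuning of bias tensors described above is omitted.

```python
# Minimal sketch of position-weighted mean pooling for GPT sentence embeddings.
# Assumptions: a small GPT model stands in for the 5.8B SGPT-BE model, and no
# bias-only contrastive fine-tuning is applied here.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"  # placeholder for a larger GPT model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT tokenizers have no pad token by default
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # [batch, seq, hidden]
    mask = batch["attention_mask"].unsqueeze(-1).float()      # [batch, seq, 1]
    # Later tokens have attended to more context in a causal model,
    # so they receive linearly increasing weights.
    positions = torch.arange(1, hidden.size(1) + 1, device=hidden.device).float()
    weights = positions.view(1, -1, 1) * mask
    weights = weights / weights.sum(dim=1, keepdim=True)
    return (hidden * weights).sum(dim=1)                      # [batch, hidden]

query_emb = embed(["How do GPT bi-encoders work?"])
doc_emb = embed(["SGPT pools GPT hidden states into sentence embeddings."])
score = torch.nn.functional.cosine_similarity(query_emb, doc_emb)
```

In a retrieval setting, documents would be embedded once and cached, while only incoming queries are embedded at search time and scored by cosine similarity.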