To better support retrieval applications such as web search and question answering, growing effort has been devoted to developing retrieval-oriented language models. Most existing works focus on improving the semantic representation capability of the contextualized embedding of the [CLS] token. However, recent studies show that the ordinary tokens besides [CLS] may provide extra information, which helps to produce better representations. As such, it is necessary to extend current methods so that all contextualized embeddings can be jointly pre-trained for retrieval tasks. With this motivation, we propose a new pre-training method, the duplex masked auto-encoder (DupMAE), which aims to improve the semantic representation capacity of the contextualized embeddings of both the [CLS] token and the ordinary tokens. It introduces two decoding tasks: one reconstructs the original input sentence from the [CLS] embedding; the other minimizes the bag-of-words (BoW) loss of the input sentence based on the embeddings of all ordinary tokens. The two decoding losses are added up to train a unified encoding model. The embeddings of the [CLS] token and the ordinary tokens, after dimension reduction and aggregation, are concatenated into one unified semantic representation of the input. DupMAE is simple but empirically competitive: with a small decoding cost, it substantially improves the model's representation capability and transferability, achieving remarkable gains on the MS MARCO and BEIR benchmarks.
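To make the two decoding objectives and the unified representation concrete, the following is a minimal sketch in PyTorch. All names (DupMAEHeadsSketch, hidden_dim, reduce_dim, the max-pooling over token positions for the BoW logits, and the mean-pooling aggregation before concatenation) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the DupMAE-style decoding heads, assuming a standard
# Transformer encoder that returns contextualized embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DupMAEHeadsSketch(nn.Module):
    def __init__(self, hidden_dim: int = 768, vocab_size: int = 30522, reduce_dim: int = 384):
        super().__init__()
        # Projects ordinary token embeddings onto the vocabulary for the BoW loss.
        self.bow_head = nn.Linear(hidden_dim, vocab_size)
        # Dimension-reduction projections applied before concatenation.
        self.cls_reduce = nn.Linear(hidden_dim, reduce_dim)
        self.tok_reduce = nn.Linear(hidden_dim, reduce_dim)

    def bow_loss(self, token_embeddings: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        # token_embeddings: [batch, seq_len, hidden]; input_ids: [batch, seq_len].
        # Pool token-level vocabulary logits over positions (max-pooling is one
        # plausible choice) and penalize low probability for words in the input.
        logits = self.bow_head(token_embeddings).max(dim=1).values        # [batch, vocab]
        log_probs = F.log_softmax(logits, dim=-1)
        target = torch.zeros_like(log_probs).scatter_(1, input_ids, 1.0)  # bag-of-words target
        target = target / target.sum(dim=-1, keepdim=True)
        return -(target * log_probs).sum(dim=-1).mean()

    def unified_representation(self, cls_embedding: torch.Tensor,
                               token_embeddings: torch.Tensor) -> torch.Tensor:
        # Reduce dimensions, aggregate ordinary tokens (mean here), then concatenate
        # the two parts into a single semantic representation of the input.
        cls_part = self.cls_reduce(cls_embedding)                  # [batch, reduce_dim]
        tok_part = self.tok_reduce(token_embeddings).mean(dim=1)   # [batch, reduce_dim]
        return torch.cat([cls_part, tok_part], dim=-1)             # [batch, 2 * reduce_dim]


# Usage sketch: cls_recon_loss would come from a separate lightweight decoder that
# reconstructs the masked input from the [CLS] embedding; the two losses are summed.
heads = DupMAEHeadsSketch()
cls_emb = torch.randn(2, 768)
tok_emb = torch.randn(2, 16, 768)
ids = torch.randint(0, 30522, (2, 16))
cls_recon_loss = torch.tensor(0.0)  # placeholder for the [CLS]-based decoding loss
total_loss = cls_recon_loss + heads.bow_loss(tok_emb, ids)
unified = heads.unified_representation(cls_emb, tok_emb)
```

The key design point the sketch tries to convey is that both decoding signals train the same encoder: the [CLS] reconstruction loss shapes the [CLS] embedding, the BoW loss shapes the ordinary token embeddings, and the two losses are simply added for joint pre-training.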