Dense retrieval (DR) has shown promising results in information retrieval. In essence, DR requires high-quality text representations to support effective search in the representation space. Recent studies have shown that pre-trained autoencoder-based language models with a weak decoder can provide high-quality text representations, boosting the effectiveness and few-shot ability of DR models. However, even a weak autoregressive decoder has a bypass effect on the encoder. More importantly, the discriminative ability of the learned representations may be limited, since each token is treated as equally important when decoding the input texts. To address the above problems, in this paper we propose a contrastive pre-training approach that learns a discriminative autoencoder with a lightweight multi-layer perceptron (MLP) decoder. The basic idea is to generate the word distribution of an input text in a non-autoregressive fashion and to pull the word distributions of two masked versions of the same text close together while pushing them away from those of other texts. We show theoretically that this contrastive strategy suppresses common words and highlights representative words in decoding, leading to discriminative representations. Empirical results show that our method significantly outperforms state-of-the-art autoencoder-based language models and other pre-trained models for dense retrieval.
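To make the pre-training objective concrete, the following is a minimal sketch, not the authors' implementation, of the described setup under a few labeled assumptions: a PyTorch encoder output vector per text, an MLP decoder that produces a vocabulary-sized word distribution in a single non-autoregressive step, and an in-batch contrastive loss that pulls the distributions of two masked views of the same text together while pushing them away from other texts. The names `MLPDecoder` and `contrastive_distribution_loss`, and the cosine-similarity scoring between distributions, are illustrative choices rather than details taken from the paper.

```python
# Minimal sketch of a contrastive objective over non-autoregressive word
# distributions (assumptions: PyTorch, cosine similarity between distributions,
# in-batch negatives; names are hypothetical, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPDecoder(nn.Module):
    """Maps a text embedding to a log word distribution in one shot."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, vocab_size),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Log-probabilities over the vocabulary (non-autoregressive decoding).
        return F.log_softmax(self.mlp(h), dim=-1)


def contrastive_distribution_loss(logp_a: torch.Tensor,
                                  logp_b: torch.Tensor,
                                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss over decoded word distributions.

    logp_a, logp_b: [batch, vocab] log word distributions decoded from two
    masked views of the same batch of texts. Row i of logp_a should match
    row i of logp_b and mismatch every other row (in-batch negatives).
    """
    p_a, p_b = logp_a.exp(), logp_b.exp()
    # Cosine similarity between distributions of the two views (a design choice).
    sim = F.normalize(p_a, dim=-1) @ F.normalize(p_b, dim=-1).t()  # [batch, batch]
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim / temperature, labels)


if __name__ == "__main__":
    batch, hidden, vocab = 4, 768, 30522
    decoder = MLPDecoder(hidden, vocab)
    # Stand-ins for encoder outputs of two masked versions of the same texts.
    h_view1, h_view2 = torch.randn(batch, hidden), torch.randn(batch, hidden)
    loss = contrastive_distribution_loss(decoder(h_view1), decoder(h_view2))
    print(loss.item())
```

In this sketch, the contrastive term rewards distributions that agree across masked views of the same text and disagree with other texts, which is the mechanism the abstract credits with down-weighting common words and emphasizing representative ones.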