In this paper, we propose LaPraDoR, a pretrained dual-tower dense retriever that does not require any supervised data for training. Specifically, we first present Iterative Contrastive Learning (ICoL), which iteratively trains the query and document encoders with a cache mechanism. ICoL not only enlarges the number of negative instances but also keeps the representations of cached examples in the same hidden space. We then propose Lexicon-Enhanced Dense Retrieval (LEDR), a simple yet effective way to enhance dense retrieval with lexical matching. We evaluate LaPraDoR on the recently proposed BEIR benchmark, which contains 18 datasets covering 9 zero-shot text retrieval tasks. Experimental results show that LaPraDoR achieves state-of-the-art performance even compared with supervised dense retrieval models, and further analysis reveals the effectiveness of our training strategy and objectives. Compared to re-ranking, our lexicon-enhanced approach runs in milliseconds (22.5x faster) while achieving superior performance.
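To make the ICoL idea more concrete, below is a minimal, hypothetical sketch of one contrastive training step with a cache of previously encoded negatives. The function and variable names (`icol_step`, `query_encoder`, `doc_encoder`, the cache size, the temperature) are illustrative assumptions, not the authors' implementation; the key points it illustrates are that only one tower is updated per iteration and that cached embeddings, produced by the frozen tower, stay in the same hidden space and can be reused as extra negatives.

```python
import torch
import torch.nn.functional as F

# Hypothetical ICoL-style step (illustration only, not the authors' code).
# Assumption: in this iteration the query tower is trained while the document
# tower is frozen, so cached document embeddings remain in a consistent space.
def icol_step(query_encoder, doc_encoder, queries, docs, cache, temperature=0.05):
    q = F.normalize(query_encoder(queries), dim=-1)        # (B, H), trainable tower
    with torch.no_grad():                                  # frozen tower this iteration
        d = F.normalize(doc_encoder(docs), dim=-1)         # (B, H)
    candidates = torch.cat([d, cache], dim=0)              # in-batch docs + cached negatives
    logits = q @ candidates.t() / temperature              # (B, B + |cache|) similarities
    labels = torch.arange(q.size(0), device=q.device)      # positives lie on the diagonal
    loss = F.cross_entropy(logits, labels)
    cache = torch.cat([cache, d], dim=0)[-4096:]           # enqueue new embeddings, bound cache size
    return loss, cache
```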
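Likewise, the following sketch shows one plausible reading of lexicon-enhanced scoring: fusing a BM25 lexical score with the dense cosine similarity for every candidate, so no separate re-ranking pass is needed. The multiplicative fusion and all names here (`lexicon_enhanced_scores`, the `rank_bm25` dependency) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from rank_bm25 import BM25Okapi

# Hypothetical lexicon-enhanced scoring (illustration only).
def lexicon_enhanced_scores(query_tokens, corpus_tokens, q_emb, d_embs):
    bm25 = BM25Okapi(corpus_tokens)                        # lexical index over tokenized corpus
    lexical = np.asarray(bm25.get_scores(query_tokens))    # (N,) BM25 scores for the query
    q = q_emb / np.linalg.norm(q_emb)
    d = d_embs / np.linalg.norm(d_embs, axis=1, keepdims=True)
    dense = d @ q                                          # (N,) cosine similarities
    return lexical * dense                                 # assumed multiplicative fusion
```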