Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search. However, these procedures are often misaligned in that training objectives do not exactly reflect the retrieval scenario at inference time. In this paper, we explore how the gap between training and inference in dense retrieval can be reduced, focusing on dense phrase retrieval (Lee et al., 2021) where billions of representations are indexed at inference. Since validating every dense retriever with a large-scale index is practically infeasible, we propose an efficient way of validating dense retrievers using a small subset of the entire corpus. This allows us to validate various training strategies including unifying contrastive loss terms and using hard negatives for phrase retrieval, which largely reduces the training-inference discrepancy. As a result, we improve top-1 phrase retrieval accuracy by 2~3 points and top-20 passage retrieval accuracy by 2~4 points for open-domain question answering. Our work urges modeling dense retrievers with careful consideration of training and inference via efficient validation while advancing phrase retrieval as a general solution for dense retrieval.
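The training strategy mentioned above, unifying contrastive loss terms over both in-batch candidates and mined hard negatives, can be illustrated with a minimal sketch. The function below is an assumption-laden illustration (names like `info_nce_loss`, the shared candidate set, and the temperature value are ours, not from the paper), showing one common way such a unified contrastive objective is computed:

```python
import numpy as np

def info_nce_loss(q, p_pos, p_hard, temperature=0.05):
    """Contrastive (InfoNCE-style) loss with in-batch and hard negatives.

    q:      (B, D) query embeddings
    p_pos:  (B, D) gold phrase/passage embeddings; row i is the positive
            for query i and an in-batch negative for every other query
    p_hard: (B, D) mined hard-negative embeddings, one per query

    All candidates are scored in a single unified softmax, so the two
    negative sources share one loss term rather than being summed separately.
    """
    B = q.shape[0]
    # Stack positives and hard negatives into one candidate pool: (2B, D).
    cand = np.concatenate([p_pos, p_hard], axis=0)
    # Scaled dot-product similarities: (B, 2B).
    logits = q @ cand.T / temperature
    # Query i's correct candidate is column i (its own positive).
    labels = np.arange(B)
    # Numerically stable log-softmax followed by cross-entropy.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()
```

With well-aligned positives the loss approaches zero; with random candidates it stays near log of the candidate-pool size, which makes the function easy to sanity-check before plugging in a real encoder.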