Large-scale retrieval aims to recall relevant documents from a huge collection given a query. It relies on representation learning to embed documents and queries into a common semantic encoding space. According to the encoding space, recent retrieval methods based on pre-trained language models (PLMs) can be coarsely categorized into either the dense-vector or the lexicon-based paradigm. These two paradigms unveil the PLMs' representation capability at different granularities, i.e., global sequence-level compression and local word-level contexts, respectively. Inspired by their complementary global-local contextualization and distinct representing views, we propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability. Experiments on passage retrieval benchmarks verify its effectiveness in both paradigms. A uni-retrieval scheme is further presented with even better retrieval quality. We lastly evaluate the model on the BEIR benchmark to verify its transferability.
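To make the two paradigms concrete, the sketch below contrasts their scoring functions: the dense-vector paradigm compresses a whole sequence into one embedding and scores by inner product, while the lexicon-based paradigm assigns per-term weights over the vocabulary and scores by weighted term overlap. This is a minimal illustration under assumed toy encodings, not UnifieR's actual encoder; the fusion weight `alpha` in the uni-retrieval sketch is a hypothetical interpolation parameter, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, VOCAB = 8, 16  # toy dimensions standing in for PLM hidden size / vocabulary

def dense_score(q_vec: np.ndarray, d_vec: np.ndarray) -> float:
    """Dense-vector paradigm: one sequence-level embedding per text;
    relevance is an inner product in the shared encoding space."""
    return float(q_vec @ d_vec)

def lexicon_score(q_w: np.ndarray, d_w: np.ndarray) -> float:
    """Lexicon-based paradigm: sparse non-negative term weights over the
    vocabulary (word-level contexts); relevance is weighted term overlap."""
    return float(q_w @ d_w)

def uni_score(q_vec, d_vec, q_w, d_w, alpha: float = 0.5) -> float:
    """Uni-retrieval sketch: fuse the two paradigms' scores.
    `alpha` is a hypothetical interpolation weight for illustration only."""
    return alpha * dense_score(q_vec, d_vec) + (1 - alpha) * lexicon_score(q_w, d_w)

# Toy query/document representations in place of real PLM outputs.
q_vec, d_vec = rng.normal(size=HIDDEN), rng.normal(size=HIDDEN)
q_w = np.maximum(rng.normal(size=VOCAB), 0.0)  # ReLU keeps weights sparse/non-negative
d_w = np.maximum(rng.normal(size=VOCAB), 0.0)

print(dense_score(q_vec, d_vec), lexicon_score(q_w, d_w), uni_score(q_vec, d_vec, q_w, d_w))
```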