The text retrieval task is mainly performed in two ways: the bi-encoder approach and the generative approach. The bi-encoder approach maps document and query embeddings into a common vector space and performs nearest-neighbor search. It consistently shows high performance and efficiency across different domains, but it suffers from an embedding-space bottleneck because query–document interaction is limited to L2 or inner-product space. The generative retrieval model retrieves by generating a target sequence and overcomes the embedding-space bottleneck by interacting in the parametric space. However, it fails to retrieve information it has not seen during training, as it depends solely on the information encoded in its own model parameters. To leverage the advantages of both approaches, we propose the Contextualized Generative Retrieval model, which uses contextualized embeddings (output embeddings of a language model encoder) as vocabulary embeddings at the decoding step of generative retrieval. The model thus uses information encoded in both the non-parametric space of contextualized token embeddings and the parametric space of the generative retrieval model. Our approach of generative retrieval with contextualized vocabulary embeddings outperforms generative retrieval with only vanilla vocabulary embeddings on the document retrieval task, by an average of 6% on KILT (NQ, TQA) and by 2x on NQ-320k, suggesting the benefits of using contextualized embeddings in generative retrieval models.
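The core substitution described above, scoring the decoder state against contextualized encoder-output embeddings instead of a static vocabulary embedding matrix, can be sketched minimally as follows. All shapes, names, and the random data are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size, n_ctx_tokens = 16, 100, 8

# Vanilla generative retrieval: next-token logits come from an inner product
# with a static vocabulary embedding matrix (one row per vocabulary entry).
vocab_emb = rng.normal(size=(vocab_size, d_model))

# Contextualized variant (hypothetical sketch): the decoding vocabulary is
# replaced by contextualized token embeddings, i.e. encoder output embeddings
# of token occurrences from the corpus documents (one row per occurrence).
ctx_emb = rng.normal(size=(n_ctx_tokens, d_model))

decoder_hidden = rng.normal(size=(d_model,))  # decoder state at the current step

def next_token_scores(hidden, emb_table):
    """Inner-product scores of the decoder state against an embedding table."""
    return emb_table @ hidden

vanilla_scores = next_token_scores(decoder_hidden, vocab_emb)   # shape (100,)
contextual_scores = next_token_scores(decoder_hidden, ctx_emb)  # shape (8,)

# Decoding then picks the highest-scoring entry; with contextualized embeddings
# that entry identifies a specific token occurrence in a specific document.
best = int(np.argmax(contextual_scores))
```

The only change relative to vanilla decoding is which embedding table the decoder state is scored against; this is what lets the model interact with non-parametric corpus information at decoding time.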