Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source. However, such retrieval models often require large memory to run because of the massive size of their passage index. In this paper, we introduce Binary Passage Retriever (BPR), a memory-efficient neural retrieval model that integrates a learning-to-hash technique into the state-of-the-art Dense Passage Retriever (DPR) to represent the passage index using compact binary codes rather than continuous vectors. BPR is trained with a multi-task objective over two tasks: efficient candidate generation based on binary codes and accurate reranking based on continuous vectors. Compared with DPR, BPR substantially reduces the memory cost from 65GB to 2GB without a loss of accuracy on two standard open-domain question answering benchmarks: Natural Questions and TriviaQA. Our code and trained models are available at https://github.com/studio-ousia/bpr.
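To make the two-stage scheme described above concrete, here is a minimal NumPy sketch of candidate generation with binary codes followed by reranking with the continuous question vector. It is an illustration under our own simplifying assumptions, not the authors' implementation: the real BPR packs codes into bits and uses Faiss for search, and the hashing is learned with a differentiable sign approximation during training; all names below are hypothetical.

```python
import numpy as np

def to_binary(x):
    # Hash continuous embeddings to {-1, +1} codes with the sign function
    # (a stand-in for BPR's learned hash layer; sign is used at inference).
    return np.where(x >= 0, 1, -1).astype(np.int8)

# Illustrative data: 10k passages and one question, 768-dim embeddings.
rng = np.random.default_rng(0)
passage_emb = rng.standard_normal((10_000, 768)).astype(np.float32)
question_emb = rng.standard_normal(768).astype(np.float32)

# The index stores only the compact binary codes, not the continuous vectors.
passage_codes = to_binary(passage_emb)

# Stage 1: efficient candidate generation by Hamming distance between
# the binary question code and the binary passage codes.
question_code = to_binary(question_emb)
hamming = (passage_codes != question_code).sum(axis=1)
candidates = np.argsort(hamming)[:1000]

# Stage 2: accurate reranking of the candidates, scoring the *continuous*
# question vector against the stored binary passage codes.
rerank_scores = passage_codes[candidates] @ question_emb
top_k = candidates[np.argsort(-rerank_scores)[:100]]
print(top_k[:10])
```

The memory saving comes from the index contents: a bit-packed 768-dimensional code occupies 96 bytes per passage versus 3,072 bytes for a float32 vector, which is the source of the reduction the abstract reports.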