Neural document retrievers, including dense passage retrieval (DPR), have outperformed classical lexical-matching retrievers such as BM25 when fine-tuned and tested on specific question-answering datasets. However, it has been shown that existing dense retrievers generalize poorly not only out of domain but even in domain (e.g., Wikipedia), especially when a named entity in a question is the dominant clue for retrieval. In this paper, we propose an approach toward in-domain generalization using embeddings generated by a frozen language model trained with the entities in the domain. By avoiding fine-tuning, we explore the possibility that the rich knowledge contained in a pretrained language model can be used directly for retrieval tasks. The proposed method outperforms conventional DPRs on entity-centric questions in the Wikipedia domain and achieves performance almost comparable to BM25 and the state-of-the-art SPAR model. We also show that contextualized keys yield strong improvements over BM25 when entity names consist of common words. Our results demonstrate the feasibility of a zero-shot retrieval method for entity-centric questions in the Wikipedia domain, where DPR has struggled to perform.