Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de facto standard for generating answers, with newer and more powerful systems appearing at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifiers. In this work we propose an alternative that does not force any structure on the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, which are then mapped to full passages through an efficient data structure. Empirically, we show that this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code and pre-trained models are available at https://github.com/facebookresearch/SEAL.
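To make the core idea concrete, here is a minimal sketch of the ngram-as-identifier retrieval scheme the abstract describes. It is illustrative only: the `build_ngram_index` and `score_passages` helpers, the dict-based ngram index, the toy (ngram, score) pairs standing in for the autoregressive model's output, and the idf-style distinctiveness weighting are all assumptions for exposition, not the paper's actual implementation or data structure.

```python
# Sketch of ngram-as-identifier retrieval: generated ngrams are scored,
# then mapped back to the full passages that contain them.
from collections import defaultdict
from math import log


def build_ngram_index(passages, max_n=3):
    """Map every ngram (up to max_n tokens) to the ids of passages containing it."""
    index = defaultdict(set)
    for pid, text in passages.items():
        tokens = text.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                index[" ".join(tokens[i:i + n])].add(pid)
    return index


def score_passages(generated_ngrams, index, num_passages):
    """Aggregate model scores of generated ngrams over the passages they occur in.

    generated_ngrams: (ngram, score) pairs, as an autoregressive model
    constrained to corpus substrings might produce them. Ngrams occurring
    in fewer passages get a higher weight, capturing the intuition that
    distinctive ngrams identify passages better (weighting is an assumption).
    """
    scores = defaultdict(float)
    for ngram, model_score in generated_ngrams:
        pids = index.get(ngram, set())
        if not pids:
            continue
        distinctiveness = log(num_passages / len(pids))
        for pid in pids:
            scores[pid] += model_score * distinctiveness
    return sorted(scores.items(), key=lambda kv: -kv[1])


passages = {
    "p1": "The FM-index supports fast substring search over a corpus",
    "p2": "Autoregressive models generate text one token at a time",
    "p3": "Substring search can locate every passage containing an ngram",
}
index = build_ngram_index(passages)
# Pretend the model generated these ngrams with these scores.
hits = score_passages([("substring search", 0.9), ("fm-index", 0.7)], index, len(passages))
print(hits)  # passages containing the most distinctive ngrams rank first
```

In this sketch a plain hash map plays the role of the efficient substring data structure; the point is only the pipeline shape: generate distinctive ngrams, weight them by how selectively they pick out passages, and sum the evidence per passage.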