We present an end-to-end differentiable training method for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers. We model retrieval decisions as latent variables over sets of relevant documents. Since marginalizing over sets of retrieved documents is computationally hard, we approximate this using an expectation-maximization algorithm. We iteratively estimate the value of our latent variable (the set of relevant documents for a given question) and then use this estimate to update the retriever and reader parameters. We hypothesize that such end-to-end training allows training signals to flow from the reader back to the retriever more effectively than in stage-wise training. This results in a retriever that is able to select more relevant documents for a question, and a reader that is trained on more accurate documents to generate an answer. Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3 absolute exact match points, achieving new state-of-the-art results. Our results also demonstrate the feasibility of learning to retrieve to improve answer generation without explicit supervision of retrieval decisions.
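To make the iterative procedure concrete, the following is a minimal toy sketch of one EM-style update, written by us for illustration and not taken from the paper's implementation. The E-step combines retriever and reader scores into a posterior over the latent relevant document; the M-step then nudges the retriever's prior toward that posterior. For simplicity the latent variable here is a single document rather than a set, the reader is held fixed, and all names (retriever_logits, reader_logprobs) are hypothetical.

```python
import numpy as np

# Toy EM-style update for a retriever over K candidate documents.
# retriever_logits: unnormalized relevance scores defining the prior p(z|q).
# reader_logprobs: log p(a|q,z), the reader's log-likelihood of the gold
# answer under each candidate document (held fixed in this sketch).

rng = np.random.default_rng(0)
K = 4                                   # candidate documents per question
retriever_logits = rng.normal(size=K)
reader_logprobs = rng.normal(size=K)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for step in range(20):
    # E-step: posterior over the latent document, p(z|q,a) ∝ p(a|q,z) p(z|q).
    prior = softmax(retriever_logits)
    posterior = softmax(reader_logprobs + np.log(prior))

    # M-step (one gradient step): maximize E_{z~posterior}[log p(z|q)],
    # whose gradient w.r.t. the logits is (posterior - prior).
    retriever_logits += lr * (posterior - prior)

print("posterior over documents:", np.round(posterior, 3))
print("learned retriever prior: ", np.round(softmax(retriever_logits), 3))
```

In this toy setting the retriever prior converges toward the posterior, which is the intended effect of the M-step: documents the reader finds useful for producing the answer receive higher retrieval scores, without any explicit supervision of which documents are relevant.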