Recent approaches to open-domain question answering retrieve passages from an external knowledge base with a retriever model, optionally rerank them with a separate reranker model, and generate an answer with yet another reader model. Despite performing related tasks, these models have separate parameters and are only weakly coupled during training. We propose casting the retriever and the reranker as internal passage-wise attention mechanisms applied sequentially within the transformer architecture and feeding the computed representations to the reader, with the hidden representations progressively refined at each stage. This allows us to use a single question answering model trained end-to-end, which uses model capacity more efficiently and also leads to better gradient flow. We present a pre-training method to effectively train this architecture and evaluate our model on the Natural Questions and TriviaQA open datasets. For a fixed parameter budget, our model outperforms the previous state-of-the-art model by 1.0 and 0.7 exact match points, respectively.
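To make the idea concrete, below is a minimal sketch, under assumptions, of what "retriever and reranker as internal passage-wise attention inside one transformer" could look like. All module names, the pooling choices, and the hyperparameters are illustrative, not the authors' exact architecture: early layers score passages (a retrieval-like stage), a second passage-wise attention re-scores the refined states (a rerank-like stage), and the final layers read the reweighted representations to produce the answer, so the whole model trains end-to-end.

```python
# Hypothetical sketch (not the paper's implementation): one transformer stack
# with two passage-wise attention blocks inserted between layer groups.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PassageWiseAttention(nn.Module):
    """Scores each passage against a pooled question vector and reweights
    the passage token representations by the resulting soft relevance."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.p_proj = nn.Linear(d_model, d_model)

    def forward(self, question_vec, passage_hidden):
        # question_vec:   (batch, d_model)
        # passage_hidden: (batch, n_passages, seq_len, d_model)
        passage_vec = passage_hidden.mean(dim=2)  # pool tokens per passage
        scores = torch.einsum(
            "bd,bpd->bp", self.q_proj(question_vec), self.p_proj(passage_vec)
        )
        weights = F.softmax(scores, dim=-1)
        # Reweight every token of every passage by its passage-level relevance.
        return passage_hidden * weights[:, :, None, None], scores


class SingleModelQA(nn.Module):
    """One encoder whose hidden states pass through a retrieval-like and then
    a rerank-like passage-wise attention block before the reading stage."""

    def __init__(self, d_model=256, n_heads=4, layers_per_stage=2, vocab_size=32000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.stage1 = nn.TransformerEncoder(make_layer(), layers_per_stage)  # "retriever"
        self.stage2 = nn.TransformerEncoder(make_layer(), layers_per_stage)  # "reranker"
        self.stage3 = nn.TransformerEncoder(make_layer(), layers_per_stage)  # "reader"
        self.retrieve_attn = PassageWiseAttention(d_model)
        self.rerank_attn = PassageWiseAttention(d_model)
        self.answer_head = nn.Linear(d_model, vocab_size)  # toy answer head

    def forward(self, question_ids, passage_ids):
        # question_ids: (batch, q_len); passage_ids: (batch, n_passages, p_len)
        b, n, p_len = passage_ids.shape
        q_hidden = self.stage1(self.embed(question_ids))
        p_hidden = self.stage1(self.embed(passage_ids).view(b * n, p_len, -1))
        p_hidden = p_hidden.view(b, n, p_len, -1)
        q_vec = q_hidden.mean(dim=1)

        # Retrieval-like passage-wise attention, then refine with more layers.
        p_hidden, retrieve_scores = self.retrieve_attn(q_vec, p_hidden)
        p_hidden = self.stage2(p_hidden.view(b * n, p_len, -1)).view(b, n, p_len, -1)

        # Rerank-like passage-wise attention over the refined hidden states.
        p_hidden, rerank_scores = self.rerank_attn(q_vec, p_hidden)

        # Reader: attend jointly over all reweighted passage tokens.
        fused = self.stage3(p_hidden.reshape(b, n * p_len, -1))
        logits = self.answer_head(fused.mean(dim=1))
        return logits, retrieve_scores, rerank_scores
```

Because the passage scores are plain attention weights inside the forward pass, gradients from the answer loss flow back through both scoring stages, which is one way to realize the tighter coupling and better gradient flow the abstract refers to.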