The task of information retrieval is an important component of many natural language processing systems, such as open-domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks have recently obtained competitive results. A challenge with such methods is obtaining the supervised data needed to train the retriever model: pairs of queries and supporting documents. In this paper, we propose a technique, inspired by knowledge distillation, to learn retriever models for downstream tasks without requiring annotated pairs of queries and documents. Our approach leverages the attention scores of a reader model, which solves the task based on the retrieved documents, as synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
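To make the distillation idea concrete, below is a minimal sketch of such an objective, not the paper's implementation: it assumes the retriever scores each retrieved passage with a query-passage dot product, and that the reader's cross-attention mass per passage is available as a tensor. The function name, shapes, and temperature parameter are illustrative assumptions.

```python
# Sketch only: retriever passage scores are trained to match the
# distribution induced by the reader's aggregated attention scores,
# which serve as synthetic relevance labels.
import torch
import torch.nn.functional as F

def distillation_loss(retriever_scores: torch.Tensor,
                      reader_attention_scores: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between the retriever's distribution over retrieved
    passages and the reader's attention distribution.

    retriever_scores:        [batch, n_passages] query-passage dot products.
    reader_attention_scores: [batch, n_passages] attention mass the reader
                             assigned to each passage.
    """
    # Detach the reader scores so only the retriever receives gradients:
    # the reader provides labels, it is not trained by this loss.
    target = F.softmax(reader_attention_scores.detach() / temperature, dim=-1)
    log_pred = F.log_softmax(retriever_scores / temperature, dim=-1)
    # KL(target || pred), averaged over the batch.
    return F.kl_div(log_pred, target, reduction="batchmean")
```

Because the targets come entirely from the reader, this loss can be computed on any collection of queries and retrieved passages, which is what removes the need for annotated query-document pairs.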