Recent work has shown that inducing a large language model (LLM) to generate explanations prior to outputting an answer is an effective strategy to improve performance on a wide range of reasoning tasks. In this work, we show that neural rankers also benefit from explanations. We use LLMs such as GPT-3.5 to augment retrieval datasets with explanations and train a sequence-to-sequence ranking model to output a relevance label and an explanation for a given query-document pair. Our model, dubbed ExaRanker, finetuned on a few thousand examples with synthetic explanations, performs on par with models finetuned on 3x more examples without explanations. Furthermore, the ExaRanker model incurs no additional computational cost during ranking and allows explanations to be requested on demand.
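To make the training setup concrete, the sketch below illustrates one plausible way to format the (input, target) pairs for such a seq2seq ranker. The prompt template, label vocabulary, and helper name are assumptions for illustration, not the paper's exact format:

```python
# Hedged sketch of ExaRanker-style training pairs. The template and
# label strings are assumptions; the paper's actual format may differ.

def make_training_pair(query: str, document: str, label: str, explanation: str):
    """Build one (source, target) string pair for seq2seq finetuning.

    The source encodes the query-document pair; the target is the
    relevance label followed by an LLM-generated explanation.
    """
    source = (
        f"Is the document relevant to the query? "
        f"Query: {query} Document: {document}"
    )
    target = f"{label}. Explanation: {explanation}"
    return source, target

src, tgt = make_training_pair(
    "what causes tides",
    "Tides are caused by the gravitational pull of the moon and sun.",
    "true",
    "The document directly states the cause of tides.",
)
```

At inference time, a model trained on such targets can emit the label alone (stopping after the first token span) or continue decoding to produce the explanation on demand, which is consistent with the abstract's claim of no extra ranking cost.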