Rerankers improve retrieval performance by capturing interactions among candidate documents. At one extreme, graph-aware adaptive retrieval (GAR) represents an information-rich regime that requires a pre-computed document similarity graph during reranking. However, because such graphs are often unavailable, and incur quadratic memory costs even when available, graph-free rerankers instead rely on large language model (LLM) calls to achieve competitive performance. We introduce L2G, a novel framework that implicitly induces document graphs from listwise reranker logs. By converting reranker signals into a graph structure, L2G enables scalable graph-based retrieval without the overhead of explicit graph computation. Results on TREC-DL and a BEIR subset show that L2G matches the effectiveness of oracle graph-based methods while incurring zero additional LLM calls.
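The core idea of converting listwise reranker logs into a graph can be sketched as follows. This is a minimal illustration, not the paper's actual method: the log format, the `induce_graph` helper, and the rank-proximity edge weighting are all assumptions made for clarity.

```python
from collections import defaultdict
from itertools import combinations

def induce_graph(rerank_logs, top_k=10):
    """Induce a weighted document graph from listwise reranker logs.

    rerank_logs: one ranked list of document IDs per query (hypothetical format).
    Documents that co-occur in a top-k list are connected; the edge weight
    grows with rank proximity (1 / (1 + rank gap)), so documents the
    reranker places close together accumulate stronger edges.
    """
    graph = defaultdict(float)
    for ranking in rerank_logs:
        top = ranking[:top_k]
        for (i, a), (j, b) in combinations(enumerate(top), 2):
            edge = tuple(sorted((a, b)))  # undirected edge
            graph[edge] += 1.0 / (1.0 + abs(i - j))
    return dict(graph)

# Two queries' reranked lists; d1 and d2 co-occur adjacently in both,
# so their edge ends up heaviest.
logs = [["d1", "d2", "d3"], ["d2", "d1", "d4"]]
graph = induce_graph(logs)
```

No extra LLM calls are needed here: the graph is a by-product of reranking that was already performed, which is the property the abstract highlights.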