This study investigates a specific form of positional bias, termed the Myopic Trap, in which retrieval models disproportionately attend to the early parts of documents while overlooking relevant information that appears later. To systematically quantify this phenomenon, we propose a semantics-preserving evaluation framework that repurposes existing NLP datasets into position-aware retrieval benchmarks. By evaluating state-of-the-art (SOTA) models across the full retrieval pipeline, including BM25, embedding models, ColBERT-style late-interaction models, and reranker models, we offer a broader empirical perspective on positional bias than prior work. Experimental results show that embedding models and ColBERT-style models exhibit significant performance degradation when query-related content is shifted toward later positions, indicating a pronounced head bias. Notably, under the same training configuration, the ColBERT-style approach shows greater potential for mitigating positional bias than the traditional single-vector approach. In contrast, BM25 and reranker models remain largely unaffected by such perturbations, underscoring their robustness to positional bias. Code and data are publicly available at: www.github.com/NovaSearch-Team/RAG-Retrieval.
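The following is a minimal sketch of the kind of semantics-preserving position perturbation the abstract describes: the query-relevant sentence is relocated to progressively later positions in an otherwise unchanged document, and the same retrieval model scores each variant. The helper names (`shift_relevant_sentence`, `position_sweep`) and the pluggable `score` function are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List

def shift_relevant_sentence(sentences: List[str], rel_idx: int, target_idx: int) -> str:
    """Move the relevant sentence from rel_idx to target_idx, keeping every
    other sentence and their relative order (semantics-preserving).
    Hypothetical helper for illustration."""
    rest = sentences[:rel_idx] + sentences[rel_idx + 1:]
    rest.insert(target_idx, sentences[rel_idx])
    return " ".join(rest)

def position_sweep(query: str,
                   sentences: List[str],
                   rel_idx: int,
                   score: Callable[[str, str], float]) -> List[float]:
    """Score the same query against the document with the relevant sentence
    placed at each possible position. For a head-biased model (e.g., many
    embedding or ColBERT-style models, per the abstract), scores drop as the
    target position moves toward the end; robust models (BM25, rerankers)
    stay roughly flat. `score(query, doc)` is any relevance function."""
    return [score(query, shift_relevant_sentence(sentences, rel_idx, t))
            for t in range(len(sentences))]
```

Plotting the resulting score curve against the target position makes the head bias directly visible as a downward slope.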