Dealing with unjudged documents ("holes") in relevance assessments is a perennial problem when evaluating search systems with offline experiments. Holes can reduce the apparent effectiveness of retrieval systems during evaluation and introduce biases in models trained with incomplete data. In this work, we explore whether large language models can help us fill such holes to improve offline evaluations. We examine an extreme, albeit common, evaluation setting wherein only a single known relevant document per query is available for evaluation. We then explore various approaches for predicting the relevance of unjudged documents with respect to a query and the known relevant document, including nearest neighbor, supervised, and prompting techniques. We find that although the predictions of these One-Shot Labelers (1SLs) frequently disagree with human assessments, the labels they produce yield a far more reliable ranking of systems than the single labels do alone. Specifically, the strongest approaches can consistently reach system ranking correlations of over 0.85 with the full rankings over a variety of measures. Meanwhile, the approach substantially reduces the false positive rate of t-tests due to holes in relevance assessments (from 15-30% down to under 5%), giving researchers more confidence in results they find to be significant.
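The evaluation setup described above can be sketched in miniature: start from qrels with a single known relevant document per query, fill the unjudged "holes" with a one-shot labeler's predictions, and compare the resulting system ranking to the full-judgment ranking with a rank correlation such as Kendall's tau. Everything below is illustrative — the toy documents, the runs, and especially the stand-in labeler (a simple token-overlap heuristic; the paper's 1SLs are nearest-neighbor, supervised, and prompting approaches).

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau-a between two equal-length score lists (ties count neither way)."""
    conc = disc = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        conc += s > 0
        disc += s < 0
    return (conc - disc) / (len(a) * (len(a) - 1) / 2)

# Toy corpus and judgments for a single query (all names/data hypothetical).
docs = {
    "d1": "neural ranking models for search",
    "d2": "ranking models with neural networks",
    "d3": "cooking pasta recipes",
    "d4": "search ranking using neural methods",
    "d5": "gardening tips for spring",
}
qrels_full = {"d1": 1, "d2": 1, "d3": 0, "d4": 1, "d5": 0}  # complete judgments
qrels_sparse = {"d1": 1}  # the single known relevant document; the rest are holes

def one_shot_label(doc_id, known_id, threshold=0.25):
    """Stand-in one-shot labeler: predict relevance from Jaccard token overlap
    with the known relevant document. Only a placeholder for the real 1SLs."""
    a, b = set(docs[doc_id].split()), set(docs[known_id].split())
    return int(len(a & b) / len(a | b) >= threshold)

# Fill the holes: keep the known label, predict the rest.
qrels_filled = dict(qrels_sparse)
for d in docs:
    if d not in qrels_filled:
        qrels_filled[d] = one_shot_label(d, "d1")

# Three hypothetical systems' top-3 results for the query.
runs = {
    "sysA": ["d1", "d2", "d4"],
    "sysB": ["d2", "d4", "d3"],
    "sysC": ["d1", "d3", "d5"],
}

def p_at_k(run, qrels, k=3):
    """Precision@k under a given set of judgments; holes count as non-relevant."""
    return sum(qrels.get(d, 0) for d in run[:k]) / k

full = [p_at_k(r, qrels_full) for r in runs.values()]
sparse = [p_at_k(r, qrels_sparse) for r in runs.values()]
filled = [p_at_k(r, qrels_filled) for r in runs.values()]

print("tau(sparse, full) =", kendall_tau(sparse, full))
print("tau(filled, full) =", kendall_tau(filled, full))
```

With only the single known relevant document, systems that retrieve other relevant documents get no credit, so the sparse ranking correlates poorly with the full one; filling the holes with even this crude labeler recovers the full ranking in the toy example — the same effect the abstract reports at scale (correlations above 0.85).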