While large-scale pre-trained language models such as BERT have advanced the state of the art in IR, their application to query performance prediction (QPP) has so far been based on pointwise modeling of individual queries. Meanwhile, recent studies suggest that cross-attention modeling of a group of documents can effectively boost the performance of both learning-to-rank algorithms and BERT-based re-ranking. To this end, we propose a BERT-based groupwise QPP model, in which the ranking contexts of a list of queries are jointly modeled to predict the relative performance of individual queries. Extensive experiments on three standard TREC collections demonstrate the effectiveness of our approach. Our code is available at https://github.com/VerdureChen/Group-QPP.
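To make the groupwise idea concrete, below is a minimal sketch in PyTorch, assuming a BERT cross-encoder whose per-query [CLS] vectors are then jointly attended across the query group before scoring. The class and parameter names (GroupwiseQPP, group_encoder, the choice of bert-base-uncased, and the two-layer group Transformer) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class GroupwiseQPP(nn.Module):
    """Sketch: pointwise BERT encoding + groupwise attention over queries."""
    def __init__(self, bert_name="bert-base-uncased", n_layers=2, n_heads=8):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Self-attention over the group of query representations, so each
        # query's predicted performance is conditioned on the other queries.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, batch_first=True)
        self.group_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # input_ids: (group_size, seq_len), one query-document pair per row;
        # the [CLS] vector summarizes each pair's ranking context.
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0, :]           # (group_size, hidden)
        group = self.group_encoder(cls.unsqueeze(0))   # attend across the group
        return self.scorer(group).squeeze(-1).squeeze(0)  # (group_size,)

# Hypothetical usage: score a small group of queries jointly.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
queries = ["what causes jet lag", "symptoms of jet lag", "jet lag remedies"]
docs = ["Jet lag results from ...", "Common symptoms are ...", "Remedies ..."]
enc = tokenizer(queries, docs, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
model = GroupwiseQPP()
with torch.no_grad():
    scores = model(enc["input_ids"], enc["attention_mask"],
                   enc["token_type_ids"])
print(scores)  # one relative-performance estimate per query in the group
```

Since the group encoder produces scores for all queries at once, a listwise or pairwise ranking loss over the group (rather than a pointwise regression loss per query) is the natural training objective for predicting relative performance.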