An effective ranking model usually requires a large amount of training data to learn the relevance of documents to queries. User clicks are often used as training data because they indicate relevance and are cheap to collect, but they contain substantial bias and noise. Prior work has mitigated various types of bias in simulated user clicks to train effective learning-to-rank models over multiple features. However, it remains unclear how to apply such methods effectively to large-scale pre-trained models with real-world click data. To alleviate the bias in real-world clicks, we incorporate heuristic-based features, refine the ranking objective, add random negatives, and calibrate the propensity calculation in the pre-training stage. In the fine-tuning stage, we fine-tune several pre-trained models on human-annotated data and train an ensemble model to aggregate their predictions. Our approach won 3rd place in the "Pre-training for Web Search" task of WSDM Cup 2023, outperforming the 4th-ranked team by 22.6%.
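To make two of the pre-training ideas concrete, the sketch below illustrates inverse-propensity weighting of clicks and random negatives in PyTorch. This is a minimal sketch under our own assumptions: the listwise softmax loss form, the function names, and the 1/rank propensity prior are illustrative, not the team's actual implementation.

```python
import torch
import torch.nn.functional as F

def ipw_softmax_loss(scores, clicks, propensity):
    """Inverse-propensity-weighted listwise softmax loss (illustrative).

    scores:     [batch, n_docs] model relevance scores per query
    clicks:     [batch, n_docs] binary click labels
    propensity: [batch, n_docs] estimated examination probability
                (e.g., position-based), clamped to bound the weights
    """
    weights = clicks / propensity.clamp(min=1e-3)
    log_probs = F.log_softmax(scores, dim=-1)
    return -(weights * log_probs).sum(dim=-1).mean()

def with_random_negatives(scores, clicks, propensity, neg_scores):
    """Append scores of randomly sampled documents as extra candidates.

    Random negatives are assumed irrelevant (click = 0) and fully
    examined (propensity = 1), so they carry zero click weight but
    still enlarge the softmax denominator, pushing unclicked
    documents down the ranking.
    """
    zeros = torch.zeros_like(neg_scores)
    ones = torch.ones_like(neg_scores)
    return (torch.cat([scores, neg_scores], dim=-1),
            torch.cat([clicks, zeros], dim=-1),
            torch.cat([propensity, ones], dim=-1))

# Toy usage: 2 queries, 10 candidates each, plus 5 random negatives.
scores = torch.randn(2, 10, requires_grad=True)
clicks = (torch.rand(2, 10) > 0.8).float()
propensity = (1.0 / torch.arange(1, 11).float()).expand(2, 10)  # 1/rank prior
neg_scores = torch.randn(2, 5)
loss = ipw_softmax_loss(*with_random_negatives(scores, clicks, propensity, neg_scores))
loss.backward()
```

Clamping the propensity bounds the variance of the inverse-propensity-weighted estimator; calibrating how the propensity itself is computed, as the abstract describes, would replace the naive 1/rank prior used here.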