Pairwise debiasing is one of the most effective strategies for reducing position bias in learning-to-rank (LTR) models. However, the scope of this strategy is limited by the underlying assumptions required by many pairwise debiasing approaches. In this paper, we develop an approach based on a minimal set of assumptions that can be applied to a much broader range of user browsing patterns and to arbitrary presentation layouts. We implement the approach as a simplified version of Unbiased LambdaMART and demonstrate that it retains the underlying unbiasedness property in a wider variety of settings than the original algorithm. Finally, using simulations with "golden" relevance labels, we show that the simplified version compares favourably with the original Unbiased LambdaMART when the examination of different positions in a ranked list is not assumed to be independent.
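To make the setting concrete, the sketch below illustrates the general pairwise-debiasing idea: a pairwise logistic loss in which (clicked, non-clicked) document pairs are reweighted by inverse examination propensities. This is only a generic illustration under assumed propensity values, not the simplified Unbiased LambdaMART developed in this paper; all names and numbers are hypothetical.

```python
# Generic inverse-propensity-weighted pairwise loss (illustrative sketch only;
# NOT the paper's algorithm). Propensities and values below are hypothetical.
import numpy as np

def weighted_pairwise_loss(scores, clicks, prop_click, prop_nonclick):
    """Pairwise logistic loss over (clicked, non-clicked) pairs for one query,
    reweighted by inverse position-dependent examination propensities.

    scores        : model scores for the query's documents, shape (n,)
    clicks        : binary click indicators, shape (n,)
    prop_click    : estimated propensity of observing a click at each position
    prop_nonclick : estimated propensity of observing a non-click at each position
    """
    loss = 0.0
    for i in np.flatnonzero(clicks == 1):        # treated as (noisily) relevant
        for j in np.flatnonzero(clicks == 0):    # treated as (noisily) non-relevant
            # Down-weight pairs whose labels are likely an artefact of position.
            w = 1.0 / (prop_click[i] * prop_nonclick[j])
            loss += w * np.log1p(np.exp(-(scores[i] - scores[j])))
    return loss

# Toy usage with made-up numbers for a 4-document ranked list.
scores = np.array([2.1, 1.7, 0.4, -0.3])
clicks = np.array([1, 0, 1, 0])
prop_click = np.array([1.0, 0.7, 0.4, 0.2])      # hypothetical examination propensities
prop_nonclick = np.array([1.0, 0.9, 0.8, 0.7])   # hypothetical non-click propensities
print(weighted_pairwise_loss(scores, clicks, prop_click, prop_nonclick))
```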