Learning-to-rank (LTR) algorithms are ubiquitous and necessary for exploring the extensive catalogs of media providers. To spare users from examining all the results, their preferences are used to select a relatively small subset. When explicit ratings are unavailable, user preferences can be inferred from interactions with the presented content. However, directly using implicit feedback can lead to learning incorrect relevance models, a problem known as biased LTR. The mismatch between implicit feedback and true relevance stems from various nuisance factors, position bias being one of the most significant. Position bias models assume that the lack of interaction with a presented item is attributable not only to the item being irrelevant but also to the item not having been examined. This paper introduces a method for modeling the probability of an item being seen in different contexts, e.g., for different users, with a single estimator. The proposed method, denoted contextual (EM)-based regression, is ranker-agnostic and correctly learns the latent examination probabilities using only implicit feedback. Our empirical results indicate that the proposed method outperforms existing position bias estimators in terms of relative error when the examination probability varies across queries. Moreover, the estimated values boost ranking performance when used to debias the implicit ranking data, even when the examination probabilities have no context dependency.
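To make the examination/relevance decomposition concrete, the sketch below illustrates the standard position-based click model (PBM) that underlies this line of work, together with a plain EM estimator fitted on simulated click logs. This is a minimal, hypothetical illustration of the generic PBM regression, not the paper's contextual estimator; all variable names, the simulated data, and the chosen parameters are assumptions for the sketch. PBM posits P(click) = P(examined at position k) · P(item is relevant), and EM recovers the latent examination probabilities from clicks alone.

```python
import random

random.seed(0)

# Assumed ground truth for the simulation (not from the paper):
TRUE_THETA = [0.9, 0.6, 0.3]             # examination probability per position
TRUE_GAMMA = [0.8, 0.5, 0.4, 0.7, 0.2]   # relevance probability per item

# Simulate click logs: each session shows 3 randomly chosen items.
# Under PBM, a click happens iff the item is examined AND relevant.
logs = []  # tuples of (position, item, clicked)
for _ in range(5000):
    items = random.sample(range(len(TRUE_GAMMA)), 3)
    for pos, it in enumerate(items):
        clicked = random.random() < TRUE_THETA[pos] * TRUE_GAMMA[it]
        logs.append((pos, it, clicked))

# EM: E-step computes expected examination/relevance given the click signal;
# M-step averages those expectations per position / per item.
theta = [0.5] * len(TRUE_THETA)
gamma = [0.5] * len(TRUE_GAMMA)
for _ in range(100):
    th_num = [0.0] * len(theta); th_den = [0.0] * len(theta)
    ga_num = [0.0] * len(gamma); ga_den = [0.0] * len(gamma)
    for pos, it, c in logs:
        t, g = theta[pos], gamma[it]
        if c:
            pe, pr = 1.0, 1.0            # a click implies examined and relevant
        else:
            denom = 1.0 - t * g
            pe = t * (1.0 - g) / denom   # P(examined | no click)
            pr = (1.0 - t) * g / denom   # P(relevant | no click)
        th_num[pos] += pe; th_den[pos] += 1.0
        ga_num[it] += pr; ga_den[it] += 1.0
    theta = [n / d for n, d in zip(th_num, th_den)]
    gamma = [n / d for n, d in zip(ga_num, ga_den)]

# PBM is identifiable only up to a global scale (theta*gamma is invariant
# under theta -> a*theta, gamma -> gamma/a), so report ratios to position 1.
ratios = [t / theta[0] for t in theta]
print([round(r, 2) for r in ratios])
```

The scale ambiguity noted in the last comment is why debiasing methods typically use examination probabilities relative to the top position; the contextual estimator described in the abstract additionally lets these probabilities vary with the query context rather than fitting one global set.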