Collaborative filtering (CF) models easily suffer from popularity bias, which causes recommendations to deviate from users' actual preferences. However, most current debiasing strategies are prone to trading off head performance against tail performance, and thus inevitably degrade overall recommendation accuracy. To reduce the negative impact of popularity bias on CF models, we incorporate Bias-aware margins into Contrastive loss and propose a simple yet effective BC loss, where the margin is tailored quantitatively to the bias degree of each user-item interaction. We investigate the geometric interpretation of BC loss, then further visualize and theoretically prove that it simultaneously learns better head and tail representations by encouraging the compactness of similar users/items and enlarging the dispersion of dissimilar users/items. On eight benchmark datasets, we use BC loss to optimize two high-performing CF models. Across various evaluation settings (i.e., imbalanced/balanced, temporal-split, fully observed unbiased, and tail/head test evaluations), BC loss outperforms state-of-the-art debiasing and non-debiasing methods by remarkable margins. Considering the theoretical guarantee and empirical success of BC loss, we advocate using it not just as a debiasing strategy, but also as a standard loss in recommender models.
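The core idea, a contrastive (InfoNCE-style) objective whose positive pair is penalized by a per-interaction angular margin, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: how the margin is derived from the bias degree of each interaction is omitted, and the `margin` tensor is simply taken as given.

```python
import torch
import torch.nn.functional as F

def bc_loss(user_emb, item_emb, margin, tau=0.1):
    """Illustrative bias-aware margin contrastive loss over in-batch negatives.

    user_emb, item_emb : (B, d) embeddings of B positive user-item pairs.
    margin             : (B,) per-interaction angular margin; assumed to be
                         larger for more biased (popular) interactions.
    """
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    cos = u @ v.t()                                   # (B, B) cosine similarities
    pos = cos.diag().clamp(-1 + 1e-7, 1 - 1e-7)
    # Penalize each positive pair by its bias-aware margin: cos(theta) -> cos(theta + M)
    pos_margined = torch.cos(torch.acos(pos) + margin)
    # Place the margined positives on the diagonal of the logit matrix
    logits = (cos - torch.diag_embed(cos.diag()) + torch.diag_embed(pos_margined)) / tau
    targets = torch.arange(u.size(0))
    return F.cross_entropy(logits, targets)
```

With `margin = 0` this reduces to a standard in-batch contrastive loss; a positive margin shrinks the positive logit, forcing the model to pull biased positive pairs closer before they count as separated from the negatives.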