Automated audits of recommender systems have found that blindly following recommendations leads users to increasingly partisan, conspiratorial, or false content. At the same time, studies using real user traces suggest that recommender systems are not the primary driver of attention toward extreme content; on the contrary, such content is mostly reached through other means, e.g., other websites. In this paper, we explain the following apparent paradox: if the recommendation algorithm favors extreme content, why is it not driving its consumption? With a simple agent-based model where users attribute different utilities to items in the recommender system, we show through simulations that the collaborative-filtering nature of recommender systems and the nicheness of extreme content can resolve the apparent paradox: although blindly following recommendations would indeed lead users to niche content, users rarely consume niche content when given the option because it is of low utility to them, which can lead the recommender system to deamplify such content. Our results call for a nuanced interpretation of ``algorithmic amplification'' and highlight the importance of modeling the utility of content to users when auditing recommender systems. Code available at: https://github.com/epfl-dlab/amplification_paradox.
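To make the mechanism concrete, the sketch below simulates a toy version of the setting the abstract describes; it is not the authors' simulator, and all names and parameter values (numbers of users and items, the 10% extreme-item share, the utility ranges, the slate size K) are illustrative assumptions. It contrasts agents who blindly take the top collaborative-filtering recommendation with agents who choose within the recommended slate in proportion to their personal utility; how far the two consumption shares diverge depends on these assumed parameters.

```python
# Minimal, illustrative sketch (hypothetical parameters throughout) of the mechanism
# in the abstract: a toy item-based collaborative-filtering recommender over items,
# a small fraction of which are "extreme" and carry low utility for most users.
import numpy as np

rng = np.random.default_rng(0)

N_USERS, N_ITEMS, N_STEPS, K = 200, 100, 50, 10   # assumed sizes, not from the paper
extreme = np.zeros(N_ITEMS, dtype=bool)
extreme[: N_ITEMS // 10] = True                   # assume 10% of items are niche/extreme

# Assumed utilities: mainstream items are broadly appealing; extreme items have low
# utility for most users and high utility only for a small niche audience.
utility = rng.uniform(0.4, 1.0, size=(N_USERS, N_ITEMS))
niche_audience = rng.random(N_USERS) < 0.05
utility[:, extreme] = rng.uniform(0.0, 0.3, size=(N_USERS, int(extreme.sum())))
utility[np.ix_(niche_audience, extreme)] = rng.uniform(
    0.7, 1.0, size=(int(niche_audience.sum()), int(extreme.sum()))
)

def simulate(follow_blindly: bool) -> float:
    """Run the toy consumption loop; return the share of views that hit extreme items."""
    views = np.zeros((N_USERS, N_ITEMS))
    views += rng.random(views.shape) < 0.05        # small random warm-start history
    extreme_views = total_views = 0
    for _ in range(N_STEPS):
        # Crude item-based collaborative filtering: score items by co-consumption
        # with the user's history via the item-item co-occurrence matrix.
        co_occurrence = views.T @ views
        np.fill_diagonal(co_occurrence, 0)
        scores = views @ co_occurrence             # per-user item scores
        for u in range(N_USERS):
            slate = np.argsort(scores[u])[-K:]     # recommended slate, ascending score
            if follow_blindly:
                choice = slate[-1]                 # always take the top recommendation
            else:
                p = utility[u, slate]              # choose within the slate by utility
                choice = rng.choice(slate, p=p / p.sum())
            views[u, choice] += 1
            extreme_views += int(extreme[choice])
            total_views += 1
    return extreme_views / total_views

print("extreme share, blindly following recommendations:",
      round(simulate(follow_blindly=True), 3))
print("extreme share, choosing by utility within the slate:",
      round(simulate(follow_blindly=False), 3))
```

Under these assumptions, the second policy lets low-utility (extreme) items lose out at choice time, which in turn feeds less extreme content back into the collaborative-filtering signal; the paper's released code at the repository above implements the authors' actual model.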