As the last few years have seen an increase in both online hostility and polarization, we need to move beyond the fact-checking reflex or the praise for better moderation on social networking sites (SNS) and investigate their impact on social structures and social cohesion. In particular, the role of recommender systems deployed at large scale by digital platforms such as Facebook or Twitter has been overlooked. This paper draws on the literature on cognitive science, digital media, and opinion dynamics to propose a faithful replica of the entanglement between recommender systems, opinion dynamics, and users' cognitive biases on SNSs like Twitter, calibrated on a large-scale longitudinal database of tweets from political activists. This model makes it possible to compare the consequences of various recommendation algorithms on the social fabric and to quantify their interaction with some major cognitive biases. In particular, we demonstrate that recommender systems that solely seek to maximize users' engagement necessarily lead to an overexposure of users to negative content (up to 300\% for some of them), a phenomenon called algorithmic negativity bias, to a polarization of the opinion landscape, and to a concentration of social power in the hands of the most toxic users. The latter are more than twice as numerous in the top 1\% of the most influential users as in the overall population. Overall, our findings highlight the urgency of identifying implementations of recommender systems that are harmful to individuals and society, in order to better regulate their deployment on systemic SNSs.
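The mechanism behind algorithmic negativity bias can be illustrated with a minimal toy simulation; this is a hedged sketch, not the paper's calibrated model. It assumes a hypothetical user model in which negative posts receive a fixed boost in predicted engagement, and compares users' exposure to negative content under a random feed versus a feed ranked purely by predicted engagement:

```python
import random

random.seed(0)

N_POSTS = 1000   # size of the candidate pool
FEED_SIZE = 10   # posts shown per user
N_USERS = 200    # simulated users

# Each post has a valence in [-1, 1]; negative valence stands for
# hostile or negative content. All parameters here are illustrative.
posts = [{"id": i, "valence": random.uniform(-1, 1)} for i in range(N_POSTS)]

def engagement_prob(post):
    """Toy negativity bias: negative content is assumed to be more
    likely to be engaged with (clicked, replied to, shared)."""
    base = 0.1
    bias = 0.4 * max(0.0, -post["valence"])  # only negative valence boosts engagement
    return base + bias

def random_feed():
    """Baseline: a feed drawn uniformly at random from the pool."""
    return random.sample(posts, FEED_SIZE)

def engagement_feed():
    """Recommender that ranks purely by predicted engagement."""
    return sorted(posts, key=engagement_prob, reverse=True)[:FEED_SIZE]

def negative_share(feeds):
    """Fraction of shown posts with negative valence, across all feeds."""
    items = [p for feed in feeds for p in feed]
    return sum(1 for p in items if p["valence"] < 0) / len(items)

rand_share = negative_share([random_feed() for _ in range(N_USERS)])
rec_share = negative_share([engagement_feed() for _ in range(N_USERS)])

print(f"negative share, random feed:     {rand_share:.2f}")
print(f"negative share, engagement feed: {rec_share:.2f}")
```

Under these assumptions the engagement-ranked feed is saturated with negative content while the random baseline mirrors the roughly balanced pool, showing how pure engagement maximization mechanically overexposes users to negativity even without any change in what users post.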