Growing evidence suggests that YouTube's recommendation algorithm plays a role in online radicalization by surfacing extreme content. Radical Islamist groups, in particular, have been exploiting the global reach of YouTube to disseminate hate and jihadist propaganda. In this quantitative, data-driven study, we investigate the prevalence of religiously intolerant Arabic YouTube videos, the tendency of the platform to recommend such videos, and how these recommendations are affected by demographics and watch history. Using a deep learning classifier we developed to detect hateful videos and a large-scale dataset of over 350K videos, we find that Arabic videos targeting religious minorities are particularly prevalent in search results (30%) and first-level recommendations (21%), and that 15% of all captured recommendations point to hateful videos. Our personalized audit experiments suggest that gender and religious identity can substantially affect the extent of exposure to hateful content. Our results contribute vital insights into the phenomenon of online radicalization and can help curb harmful content online.