Online platforms employ commercial content moderators and automated systems to identify and remove the most blatantly inappropriate content for all users. They also provide moderation settings that let users personalize which posts they want to avoid seeing. This study presents the results of a nationally representative survey of 984 US adults. We examine how users would prefer three categories of norm-violating content (hate speech, sexually explicit content, and violent content) to be regulated. Specifically, we analyze whether users prefer that platforms remove such content for all users or leave it to each user to decide whether and how much to moderate it. We explore how presumed effects on others (PME3) and support for freedom of expression, the two factors prior literature identifies as critical to attitudes about social media censorship, shape user attitudes toward this choice. We find that perceived negative effects on others and support for free speech are significant predictors of preferring personal moderation settings over platform-directed moderation for each speech category. Our findings show that platform governance initiatives need to account for both the actual and perceived media effects of norm-violating speech categories to increase user satisfaction. Our analysis also suggests that people view personal moderation tools not as an infringement on others' free speech but as a means to assert greater agency over shaping their social media feeds.