To reduce the spread of misinformation, social media platforms may take enforcement actions against offending content, such as adding informational warning labels, reducing distribution, or removing content entirely. However, both their actions and their inactions have been controversial and plagued by allegations of partisan bias. When it comes to specific content items, surprisingly little is known about what ordinary people want the platforms to do. We provide empirical evidence on the preferences of a politically balanced panel of lay raters for three potential platform actions on 368 news articles. Our results confirm that on many articles there is a lack of consensus about which actions to take. We find a clear hierarchy of perceived severity of actions, with a majority of raters wanting informational labels on the most articles and removal on the fewest. There was no partisan difference in how many articles deserve platform action, but conservatives preferred somewhat more action on content from liberal sources, and vice versa. We also find that judgments about two holistic properties, misleadingness and harm, could serve as an effective proxy for determining which actions a majority of raters would approve.
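To make the proxy idea concrete, the sketch below shows one hypothetical way panel judgments of misleadingness and harm could be mapped to majority-approved actions. The 1-7 scale, the median aggregation, and the threshold values are illustrative assumptions only and are not taken from the paper; a real mapping would have to be fit to the panel data.

```python
# Hypothetical sketch: using panel judgments of misleadingness and harm as a
# proxy for which platform actions a majority of raters would approve.
# Scale, aggregation, and thresholds are illustrative assumptions.

from statistics import median

# Perceived severity order reported in the abstract: informational labels are
# approved on the most articles, removal on the fewest.
ACTIONS = ["inform", "reduce", "remove"]

# Illustrative thresholds on a combined 1-7 misleadingness/harm score.
THRESHOLDS = {"inform": 3.5, "reduce": 4.5, "remove": 5.5}

def approved_actions(misleading_ratings, harm_ratings):
    """Return the actions a majority would plausibly approve for one article.

    misleading_ratings, harm_ratings: per-rater scores on a 1-7 scale.
    """
    # Summarize the panel with medians, then combine the two holistic properties.
    proxy = (median(misleading_ratings) + median(harm_ratings)) / 2
    return [action for action in ACTIONS if proxy >= THRESHOLDS[action]]

# Example: a panel that finds an article quite misleading but only mildly harmful.
print(approved_actions([6, 5, 6, 7, 5], [3, 4, 3, 2, 4]))  # ['inform', 'reduce']
```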