Misinformation promotes distrust in science, undermines public health, and may drive civil unrest. Vaccine misinformation, in particular, has stalled efforts to overcome the COVID-19 pandemic, prompting social media platforms' attempts to reduce it. Some have questioned whether "soft" content moderation remedies -- e.g., flagging and downranking misinformation -- were successful, suggesting that the addition of "hard" content moderation remedies -- e.g., deplatforming and content bans -- is necessary. We therefore examined whether Facebook's vaccine misinformation content removal policies were effective. Here, we show that Facebook's policies reduced the number of anti-vaccine posts but also caused several perverse effects: pro-vaccine content was also removed, engagement with remaining anti-vaccine content repeatedly recovered to pre-policy levels, and this content became more misinformative, more politically polarised, and more likely to be seen in users' newsfeeds. We explain these results as an unintended consequence of Facebook's design goal: promoting community formation. Members of communities dedicated to vaccine refusal appear to seek out misinformation from multiple sources. Community administrators make use of several channels afforded by the Facebook platform to disseminate misinformation. Our findings suggest the need to address how social media platform architecture enables community formation and mobilisation around misinformative topics when managing the spread of online content.