Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowd-sourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but such labels can also be hard for users to understand. In this study, we examine how users' sharing intentions respond to information they are provided about a hypothetical human-AI hybrid labeling system. We ask i) whether these warnings increase discernment in social media sharing intentions and ii) whether explaining how the labeling system works can boost the effectiveness of the warnings. To do so, we conduct a study ($N=1473$ Americans) in which participants indicated their likelihood of sharing content. Participants were randomly assigned to a control condition, a treatment in which false content was labeled, or a treatment in which the warning labels came with an explanation of how they were generated. We find clear evidence that both treatments increase sharing discernment, and directional evidence that explanations increase the warnings' effectiveness. Interestingly, we do not find that the explanations increase self-reported trust in the warning labels, although we do find some evidence that participants found the warnings with explanations to be more informative. Together, these results have important implications for designing and deploying transparent misinformation warning labels, and AI-mediated systems more broadly.