Misinformation is a global problem on modern social media platforms, with few solutions known to be effective. Social media platforms have offered tools to raise awareness of information, but these are closed systems that have not been empirically evaluated. Others have developed novel tools and strategies, but most have been studied out of context using static stimuli, researcher prompts, or low-fidelity prototypes. We offer a new anti-misinformation agent grounded in theories of metacognition that was evaluated within Twitter. We report on a pilot study (n=17) and a multi-part experimental study (n=57, n=49) in which participants experienced three versions of the agent, each deploying a different strategy. We found that no single strategy was superior to the control. We also confirmed the necessity of transparency and clarity about the agent's underlying logic, as well as concerns about repeated exposure to misinformation and a lack of user engagement.