The ability to discern between true and false information is essential to making sound decisions. However, with the recent rise of AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing. In an experiment (N=128), we investigated how susceptible people are to deceptive AI systems by examining how their ability to discern true news from fake news varies when AI systems are perceived as either human fact-checkers or AI fact-checking systems, and when the explanations provided by those fact-checkers are either deceptive or honest. We find that deceitful explanations significantly reduce accuracy, indicating that people are just as likely to believe deceptive AI explanations as honest ones. Although, before receiving assistance from an AI system, people had significantly higher weighted discernment accuracy on false headlines than on true headlines, we found that with AI assistance, discernment accuracy increased significantly when honest explanations were given for both true and false headlines, and decreased significantly when deceitful explanations were given for both true and false headlines. Further, we did not observe any significant differences in discernment between explanations perceived as coming from a human fact-checker and those perceived as coming from an AI fact-checker. Similarly, we found no significant differences in trust. These findings exemplify the dangers of deceptive AI systems and the need to find novel ways to limit their influence on human information processing.