Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems produce smart replies, autocompletes, and translations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are handicapped by intuitive but flawed heuristics such as associating first-person pronouns, spontaneous wording, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce language perceived as more human than human. We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.