Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as "more human than human." We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.