In recent years, Natural Language Generation (NLG) techniques in AI (e.g., T5, GPT-3, ChatGPT) have improved dramatically and are now capable of generating long, coherent, human-like texts at scale, yielding so-called deepfake texts. Despite its benefits, this advancement can also cause security and privacy issues (e.g., plagiarism, identity obfuscation, disinformation attacks). As such, it has become critically important to develop effective, practical, and scalable solutions to differentiate deepfake texts from human-written texts. Toward this challenge, in this work, we investigate how factors such as skill level and collaboration impact how well humans identify deepfake texts, studying three research questions: (1) do collaborative teams detect deepfake texts better than individuals? (2) do expert humans detect deepfake texts better than non-expert humans? and (3) what factors maximize human detection performance? We address these questions on two platforms: (1) non-expert humans, or asynchronous teams, on Amazon Mechanical Turk (AMT) and (2) expert humans, or synchronous teams, on Upwork. By analyzing detection performance and the factors that affected it, some of our key findings are: (1) expert humans detect deepfake texts significantly better than non-expert humans, (2) synchronous teams on Upwork detect deepfake texts significantly better than individuals, while asynchronous teams on AMT detect deepfake texts only weakly better than individuals, and (3) among various error categories, examining coherence and consistency in texts is useful for detecting deepfake texts. In conclusion, our work can inform the design of future tools/frameworks to improve collaborative human detection of deepfake texts.