Deepfakes are computationally created entities that falsely represent reality. They can take the form of images, video, or audio, and they pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that because fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to distinguish image deepfakes of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to their accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, yet reported similarly high confidence regardless of the image. Thus, although overall participant accuracy was 62%, per-image accuracy ranged fairly evenly from 30% to 85%, falling below 50% for one in every five images. We interpret these findings as an urgent call to action to address this threat.