This article examines public exposure to and perceptions of deepfakes based on insights from a nationally representative survey of 1,403 UK adults. The survey is one of the first of its kind since recent improvements in deepfake technology and the widespread adoption of political deepfakes. The findings reveal three key insights. First, on average, 15% of people report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other potentially harmful deepfakes such as those that spread health/religious misinformation/propaganda. In terms of common targets, exposure to deepfakes featuring celebrities was 50.2%, whereas exposure to those featuring politicians was 34.1%. A further 5.7% of respondents recall exposure to a selection of high-profile political deepfakes in the UK. Second, while exposure to harmful deepfakes was relatively low, awareness of and fears about deepfakes were high (and women were significantly more likely than men to report experiencing such fears). As with fears, general concerns about the spread of deepfakes were also high; 90.4% of respondents were either very concerned or somewhat concerned about this issue. Most respondents (at least 91.8%) were concerned that deepfakes could add to online child sexual abuse material, increase distrust in information and manipulate public opinion. Third, while awareness of deepfakes was high, usage of deepfake tools was relatively low (8%). Most respondents were not confident in their detection abilities and tended to trust audiovisual content online. Our work highlights how the problem of deepfakes has become embedded in public consciousness in just a few years; it also highlights the need for media literacy programmes and other policy interventions to address the spread of harmful deepfakes.