Recent advances in deep learning have enabled realistic digital alterations to videos, known as deepfakes. This technology raises important societal concerns regarding disinformation and authenticity, galvanizing the development of numerous deepfake detection algorithms. At the same time, there are significant differences between training data and in-the-wild video data, which may undermine the practical efficacy of these algorithms. We simulate data corruption techniques and examine the performance of a state-of-the-art deepfake detection algorithm on corrupted variants of the FaceForensics++ dataset. While deepfake detection models are robust against video corruptions that align with training-time augmentations, we find that they remain vulnerable to video corruptions that simulate decreases in video quality. Indeed, in the controversial case of the video of Gabonese President Bongo's New Year's address, the algorithm confidently authenticates the original video yet judges highly corrupted variants of it to be fake. Our work opens up both technical and ethical avenues of exploration into practical deepfake detection in global contexts.
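The quality-reducing corruptions described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden stand-in for the paper's actual corruption pipeline: it simulates a drop in video quality on a single frame by block-averaging (downsampling), nearest-neighbour upsampling, and additive Gaussian noise, using only NumPy. The function name `corrupt_frame` and the specific transforms are illustrative, not the authors' implementation.

```python
import numpy as np

def corrupt_frame(frame, downscale=4, noise_std=10.0, seed=0):
    """Simulate a quality-reducing corruption on one video frame.

    Illustrative sketch only: downsample by block-averaging, upsample by
    nearest-neighbour repetition, then add Gaussian sensor-style noise.
    The exact corruptions used in the paper are assumptions here.
    """
    h, w = frame.shape[:2]
    # Crop so both dimensions divide evenly by the downscale factor.
    h2, w2 = h - h % downscale, w - w % downscale
    f = frame[:h2, :w2].astype(np.float64)
    # Block-average to a lower resolution (destroys high-frequency detail).
    small = f.reshape(h2 // downscale, downscale,
                      w2 // downscale, downscale, -1).mean(axis=(1, 3))
    # Nearest-neighbour upsample back to the cropped frame size.
    up = np.repeat(np.repeat(small, downscale, axis=0), downscale, axis=1)
    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    rng = np.random.default_rng(seed)
    noisy = up + rng.normal(0.0, noise_std, up.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: corrupt a synthetic 128x128 RGB frame.
frame = np.random.default_rng(1).integers(0, 256, (128, 128, 3), dtype=np.uint8)
corrupted = corrupt_frame(frame)
```

Applying such transforms at varying severities to every frame of a dataset like FaceForensics++ yields the corrupted variants on which a detector's robustness can be measured.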