Thanks to the rapid progress of synthetic media generation, creating realistic fake images has become very easy. Such images can be used to wrap "rich" fake news in enhanced credibility, spawning a new wave of high-impact, high-risk misinformation campaigns. As a result, there is fast-growing interest in reliable detectors of manipulated media. The most powerful detectors to date rely on the subtle traces that every device leaves on all the images it acquires. In particular, due to proprietary in-camera processing steps, such as demosaicing or compression, each camera model leaves trademark traces that can be exploited for forensic analyses. The absence or distortion of such traces in a target image is a strong hint of manipulation. In this paper, we challenge such detectors to gain better insight into their vulnerabilities. This study is important for building forgery detectors that can better withstand malicious attacks. Our proposal is a GAN-based approach that injects camera traces into synthetic images. Given a GAN-generated image, we insert the traces of a specific camera model into it and deceive state-of-the-art detectors into believing the image was acquired by that model. Likewise, we deceive independent detectors of synthetic GAN images into believing the image is real. Experiments prove the effectiveness of the proposed method in a wide range of conditions. Moreover, no prior information about the attacked detectors is needed, only sample images from the target camera.
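To make the idea concrete, below is a minimal sketch (not the authors' implementation) of such a trace-injection GAN in PyTorch, under the following assumptions: a small residual generator adds a low-amplitude perturbation to the synthetic image, a discriminator trained on photos from the target camera model supplies the adversarial signal, and an L1 fidelity term keeps the attacked image visually unchanged. All module, function, and parameter names (TraceInjector, CameraDiscriminator, training_step, fid_weight) are hypothetical.

# Minimal sketch of adversarial camera-trace injection; not the paper's exact architecture.
import torch
import torch.nn as nn


class TraceInjector(nn.Module):
    """Residual generator: attacked image = input + small learned residual."""

    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Keep the residual small so the visible content barely changes.
        return torch.clamp(x + 0.02 * self.net(x), -1.0, 1.0)


class CameraDiscriminator(nn.Module):
    """Patch-level critic: real photos from the target camera vs. attacked synthetic images."""

    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width * 2, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def training_step(G, D, opt_g, opt_d, gan_images, camera_images, fid_weight=10.0):
    """One adversarial update: D learns the statistics of the target camera model,
    G learns to inject them while staying close to the original synthetic image."""
    bce = nn.BCEWithLogitsLoss()

    # --- discriminator update ---
    opt_d.zero_grad()
    attacked = G(gan_images).detach()
    d_real = D(camera_images)
    d_fake = D(attacked)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # --- generator update ---
    opt_g.zero_grad()
    attacked = G(gan_images)
    d_fake = D(attacked)
    loss_adv = bce(d_fake, torch.ones_like(d_fake))          # fool the camera-model critic
    loss_fid = nn.functional.l1_loss(attacked, gan_images)   # preserve image content
    loss_g = loss_adv + fid_weight * loss_fid
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

Note that in this sketch the generator never queries the attacked detectors: only sample images from the target camera model are used, consistent with the abstract's claim that no prior information on the detectors is required.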