Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms. These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks, either explicitly during analysis or implicitly during training. At the same time, deep learning has enabled new forms of anti-forensic attacks, such as adversarial examples and generative adversarial network (GAN)-based attacks. Thus far, however, no anti-forensic attack has been demonstrated against image splicing detection and localization algorithms. In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms such as EXIF-Net, Noiseprint, and Forensic Similarity Graphs. This attack operates by adversarially training an anti-forensic generator against a set of Siamese neural networks so that it is able to create synthetic forensic traces. Under analysis, these synthetic traces appear authentic and are self-consistent throughout an image. Through a series of experiments, we demonstrate that our attack is capable of fooling forensic splicing detection and localization algorithms without introducing visually detectable artifacts into an attacked image. Additionally, we demonstrate that our attack outperforms existing alternative attack approaches.