Image manipulation and forgery detection has been an active research topic for more than a decade. New-age editing tools and large-scale social platforms have given manipulated media space to thrive. Such media can be dangerous, and innumerable methods have therefore been designed and evaluated for their robustness in detecting forgery. However, the results reported by state-of-the-art systems indicate that supervised approaches achieve near-perfect performance, but only on particular datasets. In this work, we analyse the out-of-distribution generalisability of current state-of-the-art image forgery detection techniques through several experiments. Our study focuses on models that utilise handcrafted features for image forgery detection. We show that these methods fail to perform well in cross-dataset evaluations and on in-the-wild manipulated media. This raises questions about the current evaluation protocols and the overestimated performance of the systems under consideration. Note: This work was done during a summer research internship at ITMR Lab, IIIT-Allahabad under the supervision of Prof. Anupam Agarwal.