Recent advances in language-image models have enabled the generation of highly realistic images from textual descriptions. However, the increasing visual quality of these generated images poses a potential threat to the field of media forensics. This paper investigates the level of challenge that language-image generation models pose to media forensics. To this end, we propose a new approach that leverages the DALL-E2 language-image model to automatically generate and splice masked regions guided by a text prompt. To ensure realistic manipulations, we designed an annotation platform with human checking to verify reasonable text prompts. This approach yields a new image dataset, called AutoSplice, containing 5,894 manipulated and authentic images. Specifically, we generated a total of 3,621 images by locally or globally manipulating real-world image-caption pairs, which we believe will provide a valuable resource for developing generalized detection methods in this area. The dataset is evaluated on two media forensic tasks: forgery detection and localization. Our extensive experiments show that most media forensic models struggle to detect images in the AutoSplice dataset as an unseen manipulation. However, after fine-tuning on our dataset, these models exhibit improved performance on both tasks.