Synthetic image generation has opened up new opportunities but has also created threats to privacy, authenticity, and security. Detecting fake images is of paramount importance to prevent illegal activities, and previous research has shown that generative models leave unique patterns in their synthetic images that can be exploited for detection. However, the fundamental problem of generalization remains, as even state-of-the-art detectors encounter difficulty when facing generators never seen during training. To assess the generalizability and robustness of synthetic image detectors in the face of real-world impairments, this paper presents a large-scale dataset named ArtiFact, comprising diverse generators, object categories, and real-world challenges. Moreover, the proposed multi-class classification scheme, combined with a filter stride reduction strategy, addresses social platform impairments and effectively detects synthetic images from both seen and unseen generators. The proposed solution significantly outperforms other top teams by 8.34% on Test 1, 1.26% on Test 2, and 15.08% on Test 3 in the IEEE VIP Cup challenge at ICIP 2022, as measured by the accuracy metric.
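The filter stride reduction mentioned in the abstract can be illustrated with output-size arithmetic. This is a minimal sketch, not the paper's implementation: it assumes the detector backbone opens with a strided convolution (as in a typical ResNet stem), and the helper name `conv2d_output_size` is purely illustrative. Reducing the first layer's stride from 2 to 1 preserves full spatial resolution, which helps retain the subtle high-frequency traces generators leave behind.

```python
def conv2d_output_size(h, w, kernel, stride, padding):
    """Spatial size of a conv layer's output: floor((dim + 2p - k) / s) + 1."""
    return ((h + 2 * padding - kernel) // stride + 1,
            (w + 2 * padding - kernel) // stride + 1)

# A ResNet-style stem uses a 7x7 conv with stride 2 and padding 3,
# halving the resolution of a 224x224 input and discarding fine detail.
print(conv2d_output_size(224, 224, kernel=7, stride=2, padding=3))  # (112, 112)

# Reducing the stride to 1 keeps the full 224x224 resolution, so subtle
# generator fingerprints are not averaged away before deeper layers see them.
print(conv2d_output_size(224, 224, kernel=7, stride=1, padding=3))  # (224, 224)
```

The same formula applies to any strided layer, so the strategy can be repeated deeper in the network at the cost of more computation.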