Synthetic image generation has opened up new opportunities but has also created threats to privacy, authenticity, and security. Detecting fake images is of paramount importance to prevent illegal activities, and previous research has shown that generative models leave unique patterns in their synthetic images that can be exploited for detection. However, the fundamental problem of generalization remains: even state-of-the-art detectors struggle when facing generators never seen during training. To assess the generalizability and robustness of synthetic image detectors in the face of real-world impairments, this paper presents a large-scale dataset named ArtiFact, comprising diverse generators, object categories, and real-world challenges. Moreover, the proposed multi-class classification scheme, combined with a filter stride reduction strategy, addresses social platform impairments and effectively detects synthetic images from both seen and unseen generators. The proposed solution outperforms other teams by 8.34% on Test 1, 1.26% on Test 2, and 15.08% on Test 3 of the IEEE VIP CUP at ICIP 2022.
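The abstract does not detail how the filter stride reduction strategy is implemented; the sketch below is only a hypothetical illustration of the general idea. The `conv_output_size` helper is an assumption (not from the paper), and the kernel/stride/padding values are generic ResNet-style defaults, shown to make the point that reducing the stride of an early convolutional filter preserves the spatial resolution at which subtle generator fingerprints live.

```python
# Illustrative sketch (not the paper's implementation): effect of reducing
# the stride of a network's first convolutional filter on spatial resolution.

def conv_output_size(size: int, kernel: int, stride: int, padding: int) -> int:
    """Spatial output size of a conv layer: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# A typical first conv layer (7x7 kernel, stride 2, padding 3, as in ResNet)
# halves the spatial resolution of a 224x224 input, discarding fine detail:
default_out = conv_output_size(224, kernel=7, stride=2, padding=3)  # 112

# Reducing the stride to 1 keeps the full 224x224 resolution, so the subtle
# high-frequency artifacts a generator leaves behind survive the first layer:
reduced_out = conv_output_size(224, kernel=7, stride=1, padding=3)  # 224

print(default_out, reduced_out)
```

In practice this trades extra computation in the early layers for retained high-frequency information, which is plausibly why a stride-reduction strategy helps when inputs are further degraded by social-platform compression.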