Advances in face synthesis have raised alarms about the deceptive use of synthetic faces. Can synthetic identities be used effectively to fool human observers? In this paper, we present a study of human perception of synthetic faces generated using different strategies, including a state-of-the-art deep learning-based GAN model. This is the first rigorous study of the effectiveness of synthetic face generation techniques grounded in experimental methods from psychology. We answer important questions such as: how often do GAN-based and more traditional image processing-based techniques confuse human observers, and are there subtle cues within a synthetic face image that cause humans to perceive it as fake without having to search for obvious clues? To answer these questions, we conducted a series of large-scale crowdsourced behavioral experiments using different sources of face imagery. The results show that humans are unable to distinguish synthetic faces from real faces under several different circumstances. This finding has serious implications for the many applications in which face images are presented to human users.