Deep neural networks have achieved such unprecedented success in face recognition that any individual can now crawl other people's images from the Internet, without their explicit permission, to train high-precision face recognition models, which constitutes a serious violation of privacy. Recently, a well-known system named Fawkes (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by having users upload cloaked images instead of their originals. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes, by training the attacker's face recognition model with multi-cloaked images generated by Oriole. Consequently, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are revealed. Experimental results show that the proposed Oriole system effectively interferes with the performance of the Fawkes system and achieves promising attack results. Our ablation study highlights the principal factors that affect the performance of the Oriole system: the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks per uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the methodology presented in this paper will alert the security community to the need for more robust privacy-preserving deep learning models.
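For readers unfamiliar with the two quantities the ablation study varies, the following Python sketch illustrates (under our own simplifying assumptions, not as the authors' implementation) how a poisoned training set of the kind described above could be assembled, and how the DSSIM perturbation budget is measured. DSSIM is the standard structural dissimilarity, DSSIM(x, x') = (1 - SSIM(x, x')) / 2; the cloak generator `make_multicloaks` is a hypothetical stand-in for whatever produces the multi-cloaked copies.

```python
"""Illustrative sketch only: mixes multi-cloaked images with a leaked
fraction of clean user images, as the abstract describes, and checks the
DSSIM perturbation budget. `make_multicloaks` is hypothetical."""
import numpy as np
from skimage.metrics import structural_similarity


def dssim(clean: np.ndarray, cloaked: np.ndarray) -> float:
    """Structural dissimilarity in [0, 1]; images are grayscale float
    arrays in [0, 1] here to keep the sketch simple."""
    return (1.0 - structural_similarity(clean, cloaked, data_range=1.0)) / 2.0


def build_poisoned_set(clean_images, make_multicloaks,
                       leak_ratio=0.1, n_cloaks=4, budget=0.007):
    """Return a training list of multi-cloaked images plus leaked clean ones.

    make_multicloaks(img, n) -- hypothetical: returns n differently-cloaked
    copies of img, each expected to stay within the DSSIM budget.
    leak_ratio -- fraction of users whose clean image the attacker obtained.
    """
    rng = np.random.default_rng(0)
    training_set = []
    for img in clean_images:
        cloaks = make_multicloaks(img, n_cloaks)
        # Keep only cloaks that respect the perturbation budget.
        training_set.extend(c for c in cloaks if dssim(img, c) <= budget)
        if rng.random() < leak_ratio:  # attacker also holds this clean image
            training_set.append(img)
    return training_set
```

Varying `budget`, `leak_ratio`, and `n_cloaks` in such a setup corresponds to the three factors the ablation study identifies.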