The remarkable success of face recognition (FR) has endangered the privacy of internet users, particularly on social media. Recently, researchers have turned to adversarial examples as a countermeasure. In this paper, we assess the effectiveness of two widely known adversarial methods (BIM and ILLC) for de-identifying personal images. Contrary to previous claims in the literature, we find that it is not easy to achieve a high protection success rate (a suppressed identification rate) with adversarial perturbations that remain imperceptible to the human visual system. Finally, we find that the transferability of adversarial examples is strongly affected by the training parameters of the network used to generate them.
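For readers unfamiliar with the two attacks, the sketch below illustrates BIM and ILLC in the iterative sign-gradient form introduced by Kurakin et al. The PyTorch framing, the `eps`/`alpha`/`steps` values, and the assumption of inputs in [0, 1] are illustrative choices for this sketch, not the exact experimental setup of this paper.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=8/255, alpha=1/255, steps=10):
    """Basic Iterative Method (BIM): repeated sign-gradient ascent
    steps, projected onto an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss to push the image away from its true identity.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv

def illc_attack(model, x, eps=8/255, alpha=1/255, steps=10):
    """Iterative Least-Likely Class (ILLC): like BIM, but descends
    the loss toward the class the model currently finds least likely."""
    with torch.no_grad():
        y_ll = model(x).argmin(dim=1)  # least-likely class per image
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_ll)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend the loss to pull the image toward the least-likely class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv
```

In the de-identification setting studied here, `model` would be an FR classifier and `x` a batch of face images; the tension the paper examines is that raising the protection success rate typically requires a larger `eps`, which makes the perturbation visible.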