With the development of online artificial intelligence systems, many deep neural networks (DNNs) have been deployed in cloud environments. In practical applications, developers or users need to provide their private data, such as face images, to these DNNs. However, data transmitted to and stored in the cloud is insecure and at risk of privacy leakage. In this work, inspired by the Type-I adversarial attack, we propose an adversarial attack-based method to protect the visual privacy of data. Specifically, the method encrypts the visual information of private data while keeping the data correctly predicted by DNNs, without modifying the model parameters. Empirical results on face recognition tasks show that the proposed method can effectively hide the visual information in face images while hardly affecting the accuracy of the recognition models. In addition, we further extend the method to classification tasks, where it also achieves state-of-the-art performance.
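The core idea can be illustrated with a minimal sketch below; this is hypothetical PyTorch code written for illustration (the function name `visually_encrypt`, the optimizer choice, and the loss weighting are assumptions, not the paper's released implementation). It perturbs an image to move it away from its original appearance in pixel space while penalizing any drift of the model's output, so the prediction is preserved and the model itself is never modified.

```python
import torch
import torch.nn.functional as F

def visually_encrypt(model, x, steps=200, lr=0.01, lam=10.0):
    """Sketch of a Type-I-style attack for visual privacy (assumed API):
    hide the image's visual content while keeping the model's prediction."""
    model.eval()
    with torch.no_grad():
        target_logits = model(x)               # prediction to preserve
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)

    for _ in range(steps):
        logits = model(x_adv)
        # Push the encrypted image away from the original in pixel space
        # (negative MSE is minimized, i.e. the distance is maximized) ...
        visual_loss = -F.mse_loss(x_adv, x)
        # ... while keeping the model output close to the original prediction.
        pred_loss = F.mse_loss(logits, target_logits)
        loss = visual_loss + lam * pred_loss

        opt.zero_grad()
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)            # stay in a valid pixel range

    return x_adv.detach()
```

A larger `lam` trades off more visual distortion for tighter prediction consistency; the model parameters are untouched throughout, matching the constraint stated above.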