Deep learning has been widely applied in many computer vision applications with remarkable success. However, running deep learning models on mobile devices is generally challenging due to limited computing resources. A popular alternative is to use cloud services to run deep learning models on the raw data. This, however, imposes privacy risks. Some prior works proposed sending features extracted from the raw data to the cloud instead. Unfortunately, these extracted features can still be exploited by attackers to recover the raw images and to infer embedded private attributes. In this paper, we propose an adversarial training framework, DeepObfuscator, which prevents the extracted features from being used to reconstruct raw images or to infer private attributes, while retaining the information useful for the intended cloud service. DeepObfuscator includes a learnable obfuscator that is trained with our proposed adversarial training algorithm to remove privacy-related sensitive information from the extracted features. The algorithm simulates the game between an attacker, who tries to reconstruct the raw image and infer private attributes from the extracted features, and a defender, who aims to protect user privacy. By deploying the trained obfuscator on a smartphone, features can be extracted locally and then sent to the cloud. Our experiments on the CelebA and LFW datasets show that the quality of images reconstructed from the obfuscated features drops dramatically from 0.9458 to 0.3175 in terms of multi-scale structural similarity (MS-SSIM). The person in the reconstructed image thus can hardly be re-identified. The classification accuracy that the attacker can achieve on the private attributes is significantly reduced to the level of random guessing.
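To make the adversarial game described above more concrete, the sketch below shows one possible alternating training step in PyTorch. It is a minimal illustration only: the module definitions (Obfuscator, Classifier, Reconstructor), the toy layer sizes, and the trade-off weights lam1 and lam2 are all assumptions for exposition, not the authors' actual architecture or hyperparameters.

```python
# Minimal sketch of the obfuscator-vs-attacker training game (illustrative only).
import torch
import torch.nn as nn

class Obfuscator(nn.Module):          # runs on-device; extracts obfuscated features
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):          # used for the intended cloud task and the private-attribute attacker
    def __init__(self, n_out):
        super().__init__()
        self.net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_out))
    def forward(self, f):
        return self.net(f)

class Reconstructor(nn.Module):       # attacker trying to recover the raw image from features
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, f):
        return self.net(f)

obf, util, priv, recon = Obfuscator(), Classifier(2), Classifier(2), Reconstructor()
opt_def = torch.optim.Adam(list(obf.parameters()) + list(util.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(list(priv.parameters()) + list(recon.parameters()), lr=1e-4)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam1, lam2 = 1.0, 1.0                 # illustrative privacy/utility trade-off weights

def train_step(x, y_util, y_priv):
    # 1) Attacker step: improve private-attribute inference and image
    #    reconstruction on the current (frozen) obfuscated features.
    f = obf(x).detach()
    loss_adv = ce(priv(f), y_priv) + mse(recon(f), x)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()

    # 2) Defender step: keep the intended task accurate while degrading the
    #    attackers (their losses enter with a negative sign, i.e. are maximized).
    f = obf(x)
    loss_def = ce(util(f), y_util) - lam1 * ce(priv(f), y_priv) - lam2 * mse(recon(f), x)
    opt_def.zero_grad(); loss_def.backward(); opt_def.step()
```

In this sketch the two steps alternate each iteration; after training, only the obfuscator is deployed on the device, and the cloud-side task model consumes the obfuscated features.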