Privacy protection is an important research area, and it is especially critical in the big data era. To a large extent, the privacy of visual classification data lies in the mapping between an image and its corresponding label, since this relation carries a great amount of information and can be exploited in other scenarios. In this paper, we propose mapping distortion based protection (MDP) and its augmentation-based extension (AugMDP), which protect data privacy by modifying the original dataset. In the modified dataset generated by MDP, an image and its label are inconsistent ($e.g.$, a cat-like image is labeled as a dog), whereas DNNs trained on it can still achieve good performance on the benign testing set. As such, this method protects privacy even when the dataset is leaked. Extensive experiments verify the effectiveness and feasibility of our method. The code for reproducing the main results is available at \url{https://github.com/PerdonLiu/Visual-Privacy-Protection-via-Mapping-Distortion}.