Face obfuscation (blurring, mosaicing, etc.) has been shown to be effective for privacy protection; nevertheless, object recognition research typically assumes access to complete, unobfuscated images. In this paper, we explore the effects of face obfuscation on the popular ImageNet challenge visual recognition benchmark. Most categories in the ImageNet challenge are not people categories; however, many incidental people appear in the images, and their privacy is a concern. We first annotate faces in the dataset. Then we demonstrate that face obfuscation has minimal impact on the accuracy of recognition models. Concretely, we benchmark multiple deep neural networks on obfuscated images and observe that the overall recognition accuracy drops only slightly (≤ 1.0%). Further, we experiment with transfer learning to 4 downstream tasks (object recognition, scene recognition, face attribute classification, and object detection) and show that features learned on obfuscated images are equally transferable. Our work demonstrates the feasibility of privacy-aware visual recognition, improves the widely used ImageNet challenge benchmark, and suggests an important path for future visual datasets. Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation.
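To make the obfuscation operation concrete, the following is a minimal sketch of mosaicing (pixelation) applied to an annotated face bounding box. This is an illustrative implementation, not the paper's exact pipeline; the box coordinates and mosaic cell size are hypothetical parameters.

```python
import numpy as np

def mosaic_region(img, box, block=8):
    """Obfuscate a rectangular face region by mosaicing (pixelation).

    img:   H x W x C uint8 image array.
    box:   (x1, y1, x2, y2) face bounding box, hypothetical annotation format.
    block: mosaic cell size in pixels (illustrative default).

    Each block x block cell inside the box is replaced by its per-channel
    mean color, destroying facial detail while keeping coarse structure.
    """
    out = img.copy()
    x1, y1, x2, y2 = box
    for y in range(y1, y2, block):
        for x in range(x1, x2, block):
            cell = out[y:min(y + block, y2), x:min(x + block, x2)]
            # Replace the cell with its mean color (broadcast over the cell).
            cell[...] = cell.reshape(-1, cell.shape[-1]).mean(axis=0).astype(img.dtype)
    return out
```

Blurring works analogously (e.g. a Gaussian filter restricted to the box); either way, pixels outside the annotated face regions are left untouched, which is why downstream recognition of non-person categories is largely unaffected.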