Face obfuscation (blurring, mosaicing, etc.) has been shown to be effective for privacy protection; nevertheless, object recognition research typically assumes access to complete, unobfuscated images. In this paper, we explore the effects of face obfuscation on the popular ImageNet challenge visual recognition benchmark. Most categories in the ImageNet challenge are not people categories; however, many incidental people appear in the images, and their privacy is a concern. We first annotate faces in the dataset. Then we demonstrate that face blurring -- a typical obfuscation technique -- has minimal impact on the accuracy of recognition models. Concretely, we benchmark multiple deep neural networks on face-blurred images and observe that the overall recognition accuracy drops only slightly (no more than 0.68%). Further, we experiment with transfer learning to four downstream tasks (object recognition, scene recognition, face attribute classification, and object detection) and show that features learned on face-blurred images are equally transferable. Our work demonstrates the feasibility of privacy-aware visual recognition, improves the widely used ImageNet challenge benchmark, and suggests an important path for future visual datasets. Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation.
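The obfuscation techniques named above (blurring, mosaicing) are simple local image operations. As a minimal sketch, the following pixelates (mosaics) a rectangular region of an image given a face bounding box; the `(x, y, w, h)` box format and the helper name are illustrative assumptions, not the paper's implementation, and real pipelines (including the one described here) first detect or annotate the face regions.

```python
import numpy as np

def mosaic_region(img, box, block=8):
    """Pixelate (mosaic) a rectangular region of an (H, W, C) uint8 image.

    box is an assumed (x, y, w, h) face bounding box. Each block x block
    tile inside the box is replaced by its mean color, discarding the
    fine detail that identifies a face while keeping coarse structure.
    """
    x, y, w, h = box
    region = img[y:y + h, x:x + w].astype(np.float64)  # copy, not a view
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))  # fill tile with its mean color
    out = img.copy()
    out[y:y + h, x:x + w] = region.astype(np.uint8)
    return out
```

Gaussian blurring, the variant actually benchmarked in the paper, would replace the per-tile averaging with a convolution against a Gaussian kernel over the same region; the mosaic version is shown here only because it is self-contained.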