Image obfuscation (blurring, mosaicing, etc.) is widely used for privacy protection. However, computer vision research often overlooks privacy by assuming access to original unobfuscated images. In this paper, we explore image obfuscation in the ImageNet challenge. Most categories in the ImageNet challenge are not people categories; nevertheless, many images contain incidental people, whose privacy is a concern. We first annotate faces in the dataset. Then we investigate how face blurring -- a typical obfuscation technique -- impacts classification accuracy. We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories. Still, the overall accuracy only drops slightly ($\leq 0.68\%$), demonstrating that we can train privacy-aware visual classifiers with minimal impact on accuracy. Further, we experiment with transfer learning to 4 downstream tasks: object recognition, scene recognition, face attribute classification, and object detection. Results show that features learned on face-blurred images are equally transferable. Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation.
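As a rough illustration of the obfuscation step (a minimal sketch, not the authors' exact pipeline, which relies on their released face annotations and code at the URL above), face blurring can be implemented by applying a heavy Gaussian blur inside each annotated face bounding box. The function name `blur_faces`, the file `example.jpg`, and the box coordinates below are hypothetical.

```python
import cv2

def blur_faces(image, boxes, ksize=51):
    """Blur each face region in `image`.

    image: HxWx3 uint8 array (BGR, as loaded by cv2.imread).
    boxes: list of (x1, y1, x2, y2) pixel coordinates of face boxes.
    ksize: Gaussian kernel size (must be odd); larger values blur more.
    """
    out = image.copy()
    for (x1, y1, x2, y2) in boxes:
        face = out[y1:y2, x1:x2]
        # A strong Gaussian blur removes identifiable facial detail
        # while leaving the rest of the image untouched.
        out[y1:y2, x1:x2] = cv2.GaussianBlur(face, (ksize, ksize), 0)
    return out

if __name__ == "__main__":
    img = cv2.imread("example.jpg")
    face_boxes = [(120, 40, 190, 120)]  # hypothetical annotated face box
    cv2.imwrite("example_blurred.jpg", blur_faces(img, face_boxes))
```

Blurred images produced this way can then be fed to standard training and evaluation code unchanged, which is what makes the accuracy comparison in the abstract possible.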