As the privacy risks posed by camera surveillance and facial recognition have grown, so has research into privacy preservation algorithms. Among these, visual privacy preservation algorithms attempt to protect the bodily privacy of subjects in images and video by obfuscating privacy-sensitive regions. While the disparate performance of facial recognition systems across phenotypes has been studied extensively, its counterpart, privacy preservation, is rarely analysed from a fairness perspective. In this paper, the fairness of commonly used visual privacy preservation algorithms is investigated through the performance of facial recognition models on obfuscated images. Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
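To make the evaluation setup concrete, the following is a minimal sketch, assuming Pillow and NumPy, of how one might apply common obfuscations (Gaussian blur, pixelation) and compare how much identity signal survives per demographic group. The helper names (`embed`, `identity_leakage`, `leakage_by_group`) and the stand-in pixel-based feature extractor are illustrative assumptions, not the paper's actual pipeline or the PubFig loading code; a real study would substitute a trained face recognition model for `embed`.

```python
# Illustrative sketch: how much identity information survives common
# obfuscations, aggregated by group. All names here are assumptions,
# not the paper's implementation.
import numpy as np
from PIL import Image, ImageFilter


def blur(img: Image.Image, radius: int = 8) -> Image.Image:
    """Gaussian-blur obfuscation."""
    return img.filter(ImageFilter.GaussianBlur(radius))


def pixelate(img: Image.Image, block: int = 16) -> Image.Image:
    """Pixelation: downsample, then upsample with nearest-neighbour."""
    w, h = img.size
    small = img.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    return small.resize((w, h), Image.NEAREST)


def embed(img: Image.Image) -> np.ndarray:
    """Stand-in feature extractor (normalised grayscale pixels).
    A real fairness study would use a face recognition model here."""
    x = np.asarray(img.convert("L").resize((32, 32)), dtype=np.float32).ravel()
    return x / (np.linalg.norm(x) + 1e-8)


def identity_leakage(clear: Image.Image, obfuscated: Image.Image) -> float:
    """Cosine similarity between clear and obfuscated embeddings:
    higher values mean the obfuscation preserved more identity signal."""
    return float(embed(clear) @ embed(obfuscated))


def leakage_by_group(records, obfuscate):
    """records: iterable of (PIL image, group label) pairs.
    Returns the mean leakage per group, exposing disparities."""
    scores = {}
    for img, group in records:
        scores.setdefault(group, []).append(identity_leakage(img, obfuscate(img)))
    return {g: float(np.mean(v)) for g, v in scores.items()}
```

Under this sketch, a large gap between groups in the per-group leakage scores for the same obfuscation would indicate exactly the kind of unequal privacy protection the paper reports.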