This paper studies a novel privacy-preserving anonymization problem for pedestrian images, which preserves personal identity information (PII) for authorized models while preventing PII from being recognized by third parties. Conventional anonymization methods unavoidably cause semantic information loss, leading to limited data utility. Moreover, existing learned anonymization techniques, while retaining various identity-irrelevant utilities, alter the pedestrian identity and are thus unsuitable for training robust re-identification models. To explore the privacy-utility trade-off for pedestrian images, we propose a joint learning reversible anonymization framework, which can reversibly generate full-body anonymous images with little performance drop on person re-identification tasks. The core idea is to adopt desensitized images generated by conventional methods as the initial privacy-preserving supervision and to jointly train an anonymization encoder with a recovery decoder and an identity-invariant model. We further propose a progressive training strategy that improves performance by iteratively upgrading the initial anonymization supervision. Experiments demonstrate that our anonymized pedestrian images protect privacy while boosting re-identification performance. Code is available at \url{https://github.com/whuzjw/privacy-reid}.
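To make the joint objective concrete, below is a minimal sketch of one training step as described in the abstract: the anonymization encoder is supervised by a conventionally desensitized image, the recovery decoder enforces reversibility, and a re-identification model keeps the anonymized image identity-discriminative, with a separate routine for the progressive upgrade of the anonymization supervision. The module names, loss choices (L1 and cross-entropy), weights, and the use of PyTorch are illustrative assumptions, not the implementation in the linked repository.

```python
# Minimal sketch of the joint training idea, assuming PyTorch and
# hypothetical nn.Module instances for the three components.
import torch
import torch.nn.functional as F

def joint_training_step(anon_encoder, recovery_decoder, reid_model,
                        x, y, x_desensitized, optimizer,
                        w_anon=1.0, w_rec=1.0, w_id=1.0):
    """One joint update over a batch.

    x              : original pedestrian images, shape (B, C, H, W)
    y              : identity labels, shape (B,)
    x_desensitized : images anonymized by a conventional method, used as the
                     initial privacy-preserving supervision
    """
    x_anon = anon_encoder(x)           # full-body anonymized image
    x_rec = recovery_decoder(x_anon)   # recovery for authorized use

    # 1) push the generated image toward the desensitized supervision
    loss_anon = F.l1_loss(x_anon, x_desensitized)
    # 2) make the anonymization reversible
    loss_rec = F.l1_loss(x_rec, x)
    # 3) keep the anonymized image useful for re-identification
    logits = reid_model(x_anon)
    loss_id = F.cross_entropy(logits, y)

    loss = w_anon * loss_anon + w_rec * loss_rec + w_id * loss_id
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def upgrade_supervision(anon_encoder, x):
    # Progressive training: after a stage converges, the current encoder's
    # outputs replace the conventional desensitized images as the new
    # anonymization supervision for the next stage.
    return anon_encoder(x).detach()
```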