We propose AnonyGAN, a GAN-based solution for face anonymisation that replaces the visual information of a source identity with that of a condition identity provided as any single image. With the goal of maintaining the geometric attributes of the source face, i.e., its facial pose and expression, and of promoting more natural face generation, we exploit a bipartite graph to explicitly model, through a deep model, the relations between the facial landmarks of the source identity and those of the condition identity. We further propose a landmark attention model that relaxes the manual selection of facial landmarks, allowing the network to weight the landmarks for the best visual naturalness and pose preservation. Finally, to facilitate appearance learning, we propose a hybrid training strategy that addresses the lack of direct pixel-level supervision. We evaluate our method and its variants on two public datasets, CelebA and LFW, in terms of visual naturalness, facial pose preservation, and impact on face detection and re-identification. We show that AnonyGAN significantly outperforms state-of-the-art methods in visual naturalness, face detection, and pose preservation.