In this paper, we introduce FairFaceGAN, a fairness-aware facial image-to-image translation model that mitigates unwanted translation of protected attributes (e.g., gender, age, race) during facial attribute editing. Unlike existing models, FairFaceGAN learns fair representations with two separate latents: one related to the target attributes to translate, and the other unrelated to them. This strategy enables FairFaceGAN to separate the information about protected attributes from that about target attributes, and it prevents unwanted translation of protected attributes during target attribute editing. To evaluate the degree of fairness, we perform two types of experiments on the CelebA dataset. First, we compare fairness-aware classification performance when augmenting data with existing image translation methods and with FairFaceGAN, respectively. Second, we propose a new fairness metric, the Fréchet Protected Attribute Distance (FPAD), which measures how well protected attributes are preserved. Experimental results demonstrate that FairFaceGAN shows consistent improvements in fairness over existing image translation models. Further, we evaluate image translation performance, where FairFaceGAN shows results competitive with those of existing methods.
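As a rough illustration of what a Fréchet-style metric such as FPAD involves, the sketch below computes the Fréchet distance between two sets of features under a Gaussian assumption; it is a minimal sketch only, assuming features come from some protected-attribute classifier (the choice of feature extractor here is an assumption, not the paper's exact instantiation).

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_trans):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_real, feats_trans: (N, D) arrays of features, e.g. from a
    protected-attribute classifier (hypothetical extractor choice).
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_trans.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_trans, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small imaginary
    # parts can appear from numerical error, so keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

A lower value would indicate that the feature distributions of original and translated images agree more closely, i.e., that protected-attribute information is better preserved.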