Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced training data or through mode collapse. The networks focus on the core of the data distribution, leaving the tails, the edges of the distribution, behind. We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating from the distribution's core. Building on this observation, we outline a method for mitigating generative bias through a self-conditioning process, where distances in the latent space of a pre-trained generator are used to provide initial labels for the data. By fine-tuning the generator on a re-sampled distribution drawn from these self-labeled data, we force the generator to better contend with rare semantic attributes and enable more realistic generation of these properties. We compare our models to a wide range of latent editing methods, and show that, by alleviating the bias, they achieve finer semantic control and better identity preservation through a wider range of transformations. Our code and models will be available at https://github.com/yzliu567/sc-gan
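The self-labeling and re-sampling idea in the abstract can be illustrated with a toy sketch. This is a minimal, hypothetical stand-in, not the paper's implementation: it uses a Gaussian cloud in place of real latent codes, marks the farthest samples from the latent mean as the distribution's "tail", and re-weights sampling so tail and core are drawn equally often (the kind of re-balanced distribution the generator would then be fine-tuned on).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for latent codes of a pre-trained generator (illustrative only).
z = rng.normal(size=(10_000, 64))

# Self-labeling by latent distance: the farthest 10% play the role of the tail.
d = np.linalg.norm(z - z.mean(axis=0), axis=1)
tail = (d > np.quantile(d, 0.9)).astype(int)  # 1 = rare/tail, 0 = core

# Inverse-frequency weights: rare labels are drawn as often as common ones.
counts = np.bincount(tail)
w = 1.0 / counts[tail]
w /= w.sum()

# A fine-tuning pass would train on this re-balanced sample instead of the raw data.
idx = rng.choice(len(z), size=len(z), replace=True, p=w)
balanced_tail_frac = tail[idx].mean()  # rises from ~0.1 toward ~0.5
```

The design choice mirrored here is that no external labels are needed: the labels come from geometry in the generator's own latent space, and only the sampling distribution changes, not the architecture.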