Recent works have shown that a rich set of semantic directions exists in the latent space of Generative Adversarial Networks (GANs), enabling a variety of facial attribute editing applications. However, existing methods may suffer from poor disentanglement of attribute variations, leading to unwanted changes in other attributes when altering the desired one. The semantic directions used by existing methods are defined at the attribute level, making it difficult to model complex attribute correlations, especially in the presence of attribute distribution bias in the GAN's training set. In this paper, we propose a novel framework (IALS) that performs Instance-Aware Latent-Space Search to find semantic directions for disentangled attribute editing. Instance information is injected by leveraging the supervision from a set of attribute classifiers evaluated on the input images. We further propose a Disentanglement-Transformation (DT) metric to quantify attribute transformation and disentanglement efficacy, and use it to find the optimal control factor between attribute-level and instance-specific directions. Experimental results on both GAN-generated and real-world images collectively show that our method outperforms recently proposed state-of-the-art methods by a wide margin. Code is available at https://github.com/yxuhan/IALS.
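The core editing step described above can be sketched as follows. This is a minimal illustration only, not the authors' actual implementation: all function names, vector shapes, and the use of a raw classifier gradient as the instance-specific direction are assumptions for the sake of the sketch. It shows how an attribute-level direction and an instance-specific direction can be blended with a control factor before being applied to a latent code.

```python
import numpy as np

def edit_latent(z, d_attr, grad_cls, lam=0.5, alpha=1.0):
    """Blend an attribute-level direction with an instance-specific one.

    z        : latent code of the input image (hypothetical GAN latent vector)
    d_attr   : precomputed attribute-level semantic direction
    grad_cls : gradient of an attribute classifier w.r.t. z for this instance,
               standing in for the instance-specific direction (assumption)
    lam      : control factor between attribute-level and instance-specific parts
    alpha    : editing strength along the blended direction
    """
    # normalize both directions so the control factor is meaningful
    d_inst = grad_cls / (np.linalg.norm(grad_cls) + 1e-8)
    d_attr = d_attr / (np.linalg.norm(d_attr) + 1e-8)
    # instance-aware direction as a convex combination of the two
    d = (1.0 - lam) * d_attr + lam * d_inst
    d = d / (np.linalg.norm(d) + 1e-8)
    return z + alpha * d

# Toy usage: random vectors stand in for a real latent code and gradient.
rng = np.random.default_rng(0)
z = rng.standard_normal(512)
d_attr = rng.standard_normal(512)
grad = rng.standard_normal(512)
z_edited = edit_latent(z, d_attr, grad, lam=0.5, alpha=3.0)
```

Because the blended direction is unit-normalized, `alpha` directly controls how far the latent code moves, while `lam` trades off the generic attribute-level direction against the per-image one.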