Transfer learning of StyleGAN has recently shown great potential to solve diverse tasks, especially in domain translation. Previous methods utilized a source model by swapping or freezing weights during transfer learning; however, they suffer from limited visual quality and coarse control over source features. Specifically, they require additional, computationally demanding models and allow only a restricted set of discrete control steps, which prevents smooth transitions. In this paper, we propose a new approach that overcomes these limitations. Instead of swapping or freezing weights, we introduce a simple feature matching loss to improve generation quality. In addition, to control the degree of preserved source features, we train the target model with the proposed strategy, FixNoise, which preserves source features only in a disentangled subspace of the target feature space. Owing to this disentangled feature space, our method can smoothly control the degree of source features within a single model. Extensive experiments demonstrate that the proposed method generates more consistent and realistic images than previous works.
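As a rough illustration of the feature matching idea mentioned above, the sketch below computes a mean squared distance between corresponding intermediate feature maps of a frozen source generator and the target generator being fine-tuned. This is a minimal NumPy sketch under our own simplifying assumptions; the function name and the plain per-layer MSE are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np

def feature_matching_loss(feats_src, feats_tgt):
    """Hypothetical feature matching loss: average per-layer MSE
    between intermediate features of a frozen source generator
    (feats_src) and the target generator (feats_tgt)."""
    assert len(feats_src) == len(feats_tgt), "layer counts must match"
    total = 0.0
    for fs, ft in zip(feats_src, feats_tgt):
        # Penalize the target's features drifting away from the
        # source's at each chosen layer.
        total += np.mean((fs - ft) ** 2)
    return total / len(feats_src)

# Toy usage: two layers of fake 4x4 feature maps.
src = [np.zeros((4, 4)), np.ones((4, 4))]
tgt = [np.zeros((4, 4)), np.ones((4, 4))]
print(feature_matching_loss(src, tgt))  # identical features -> 0.0
```

In practice such a loss would be added to the adversarial objective during fine-tuning, pulling the target generator's intermediate representations toward the source's where consistency is desired.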