Transfer learning of StyleGAN has recently shown great potential to solve diverse tasks, especially in domain translation. Previous methods utilized a source model by swapping or freezing weights during transfer learning; however, they have limitations on visual quality and control of source features: they require additional models that are computationally demanding, and their control steps are too coarse to allow a smooth transition. In this paper, we propose a new approach that overcomes these limitations. Instead of swapping or freezing, we introduce a simple feature matching loss to improve generation quality. In addition, to control the degree of source features, we train the target model with the proposed strategy, FixNoise, which preserves the source features only in a disentangled subspace of the target feature space. Owing to this disentangled feature space, our method can smoothly control the degree of source features within a single model. Extensive experiments demonstrate that the proposed method generates more consistent and realistic images than previous works.
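To make the two named ingredients concrete, the following is a minimal NumPy sketch, not the authors' implementation: a feature matching loss computed between corresponding intermediate activations of a frozen source generator and the trainable target generator, and the FixNoise idea of applying that loss only under one fixed realization of the per-layer noise inputs, so that source features are anchored to that noise subspace. The functions `synth_features` and the single-weight "generator" are hypothetical stand-ins for StyleGAN synthesis layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_matching_loss(src_feats, tgt_feats):
    # Mean squared error between corresponding intermediate feature maps
    # of the frozen source model and the target model being fine-tuned.
    return float(np.mean([np.mean((s - t) ** 2)
                          for s, t in zip(src_feats, tgt_feats)]))

def synth_features(weight, noise):
    # Toy stand-in for a StyleGAN synthesis layer: a scalar "weight"
    # plus the injected per-pixel noise map. Returns a list of feature maps.
    return [weight + noise]

# FixNoise: the source-preserving loss is evaluated only under ONE fixed
# noise realization, confining source features to that noise subspace.
fixed_noise = rng.standard_normal((4, 4))

src_w, tgt_w = 0.5, 0.7  # hypothetical source/target layer weights
loss_fixed = feature_matching_loss(synth_features(src_w, fixed_noise),
                                   synth_features(tgt_w, fixed_noise))
# With identical fixed noise on both paths, the noise cancels and the
# loss reduces to the squared weight gap: (0.5 - 0.7)^2 = 0.04.

def interpolate_noise(fixed, random_noise, alpha):
    # At inference, blending the fixed noise (alpha=0, source-like detail)
    # with freshly sampled noise (alpha=1, target-like detail) gives a
    # smooth control over the degree of preserved source features.
    return (1.0 - alpha) * fixed + alpha * random_noise
```

Because the anchoring happens in the noise space rather than in the weights, a single trained model can sweep `alpha` continuously instead of being limited to a few discrete swap/freeze configurations.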