Domain adaptation of GANs is the problem of fine-tuning state-of-the-art GAN models (e.g., StyleGAN) pretrained on a large dataset to a specific domain with few samples (e.g., painted faces, sketches, etc.). While a great number of methods tackle this problem in different ways, many important questions remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. First, we perform a detailed exploration of the parts of StyleGAN most responsible for adapting the generator to a new domain, depending on the similarity between the source and target domains. In particular, we show that the affine layers of StyleGAN can be sufficient for fine-tuning to similar domains. Second, inspired by these findings, we investigate StyleSpace and utilize it for domain adaptation. We show that there exist directions in StyleSpace that can adapt StyleGAN to new domains. Further, we examine these directions and discover many of their surprising properties. Finally, we leverage our analysis and findings to deliver practical improvements and applications in standard tasks such as image-to-image translation and cross-domain morphing.
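The claim that fine-tuning only the affine layers can suffice for similar domains amounts to a simple parameter-selection rule. A minimal sketch of that rule is shown below; the parameter names are hypothetical illustrations, loosely modeled on the naming scheme of the StyleGAN2-ADA PyTorch implementation, not the paper's actual code.

```python
# Hedged sketch: select only the per-layer affine ("A") style projections
# of a StyleGAN-like generator for fine-tuning, freezing everything else.
# Parameter names are illustrative assumptions, not the paper's codebase.

def trainable_param_names(all_names):
    """Keep only parameters of the affine style projections (w -> styles)."""
    return [n for n in all_names if ".affine." in n]

params = [
    "mapping.fc0.weight",                # mapping network (frozen)
    "synthesis.b4.conv1.affine.weight",  # A: w -> per-channel styles
    "synthesis.b4.conv1.affine.bias",
    "synthesis.b4.conv1.weight",         # convolution kernel (frozen)
    "synthesis.b8.conv0.affine.weight",
]

print(trainable_param_names(params))
```

In a real training loop, one would set `requires_grad = False` on every parameter whose name is not selected, so the optimizer only updates the affine layers.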