Domain adaptation of GANs is the problem of fine-tuning state-of-the-art GAN models (e.g., StyleGAN) pretrained on a large dataset to a specific domain with few samples (e.g., painted faces, sketches, etc.). While a great number of methods tackle this problem in different ways, many important questions remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. First, we perform a detailed exploration of which parts of StyleGAN are most responsible for adapting the generator to a new domain, depending on the similarity between the source and target domains. As a result of this in-depth study, we propose new efficient and lightweight parameterizations of StyleGAN for domain adaptation. In particular, we show that there exist directions in StyleSpace (StyleDomain directions) that are sufficient for adapting to similar domains and that can be reduced further. For dissimilar domains, we propose the Affine$+$ and AffineLight$+$ parameterizations, which allow us to outperform existing baselines in few-shot adaptation in the low-data regime. Finally, we examine StyleDomain directions and discover many of their surprising properties, which we apply to domain mixing and cross-domain image morphing.
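To make the StyleDomain idea concrete, the following is a minimal sketch (not the authors' released code) of how such a direction could be applied: a frozen StyleGAN generator produces its StyleSpace codes, and a learned per-channel offset shifts them toward the target domain. Here `get_style_codes` and `synthesize` are hypothetical stand-ins for the corresponding pieces of a StyleGAN2 implementation; only `generator.mapping` follows the common NVIDIA stylegan2-ada-pytorch layout.

```python
import torch

@torch.no_grad()
def apply_styledomain_direction(generator, z, direction):
    """Generate a target-domain image by shifting StyleSpace codes.

    z:         input latent, shape (batch, 512)
    direction: learned StyleDomain direction, one offset per StyleSpace layer
    """
    w = generator.mapping(z)                  # z -> W latent
    styles = get_style_codes(generator, w)    # per-layer style vectors (StyleSpace)
    shifted = [s + d for s, d in zip(styles, direction)]  # s' = s + delta_s
    return synthesize(generator, shifted)     # decode shifted styles into an image
```

The generator weights stay fixed; the entire adaptation is carried by the additive offsets, which is what makes the parameterization lightweight.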
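For dissimilar domains, the Affine$+$ family instead fine-tunes a small subnetwork. Below is a hedged sketch of an Affine$+$-style setup: freeze the whole generator and train only the affine layers that map W to StyleSpace. The parameter naming (`.affine.` inside `generator.synthesis`) follows the NVIDIA stylegan2-ada-pytorch layout and may differ in other implementations; this illustrates the parameterization, not the paper's exact training code.

```python
import torch

def affine_plus_parameters(generator):
    """Return the small set of parameters to optimize for domain adaptation."""
    for p in generator.parameters():
        p.requires_grad_(False)               # freeze convolutions, mapping, etc.

    trainable = []
    for name, p in generator.synthesis.named_parameters():
        if ".affine." in name:                # e.g. "b64.conv1.affine.weight"
            p.requires_grad_(True)            # only W -> StyleSpace affines are tuned
            trainable.append(p)
    return trainable

# Usage sketch: optimizer = torch.optim.Adam(affine_plus_parameters(G), lr=2e-3)
```

Training only the affine layers keeps the trainable parameter count small, which is why this parameterization suits the few-shot, low-data regime described above.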