The domain adaptation framework of GANs has achieved great progress in recent years as one of the main successful approaches to training contemporary GANs when training data is very limited. In this work, we significantly improve this framework by proposing an extremely compact parameter space for fine-tuning the generator. We introduce a novel domain-modulation technique that allows us to optimize only a 6-thousand-dimensional vector instead of the 30 million weights of StyleGAN2 to adapt to a target domain. We apply this parameterization to state-of-the-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space. Additionally, we propose a new regularization loss that considerably enhances the diversity of the fine-tuned generator. Inspired by the reduction in the size of the optimized parameter space, we consider the problem of multi-domain adaptation of GANs, i.e., a setting in which the same model can adapt to several domains depending on the input query. We propose HyperDomainNet, a hypernetwork that predicts our parameterization given the target domain. We empirically confirm that it can successfully learn a number of domains at once and may even generalize to unseen domains. Source code can be found at https://github.com/MACderRu/HyperDomainNet
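To make the compact parameter space concrete, below is a minimal PyTorch sketch of the domain-modulation idea: the generator stays frozen and only a single vector that rescales its per-layer style channels is trained. The layer dimensions in STYLE_DIMS, the class name, and the exact 1 + d modulation form are illustrative assumptions for exposition, not the paper's precise implementation.

```python
import torch
import torch.nn as nn

# Illustrative per-layer style dimensions; in StyleGAN2 the channel
# counts of the modulated convolutions sum to a few thousand, which is
# the size of the optimized vector (vs. ~30M generator weights).
STYLE_DIMS = [512, 512, 512, 512, 512, 256, 256, 128, 128, 64, 64, 32]

class DomainModulation(nn.Module):
    """Trainable domain vector that modulates a frozen generator's
    per-layer style codes instead of fine-tuning its weights."""

    def __init__(self, style_dims):
        super().__init__()
        # The only trainable parameters: one scalar per style channel,
        # initialized to zero so training starts at the source domain.
        self.d = nn.ParameterList(
            [nn.Parameter(torch.zeros(dim)) for dim in style_dims]
        )

    def forward(self, styles):
        # styles: per-layer style tensors of shape (batch, dim), e.g. the
        # outputs of StyleGAN2's affine layers. Element-wise rescaling
        # shifts the frozen generator toward the target domain.
        return [s * (1.0 + d) for s, d in zip(styles, self.d)]

mod = DomainModulation(STYLE_DIMS)
# Only a few thousand parameters are optimized in total.
print(sum(p.numel() for p in mod.parameters()))
```

In the multi-domain setting described above, a hypernetwork such as HyperDomainNet would predict this vector from a representation of the target domain rather than optimizing it separately per domain.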