The adaptation of a Generative Adversarial Network (GAN) aims to transfer a pre-trained GAN to a target domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and has rarely been explored in previous works. We consider that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style, such as texture and color, and the emergence of new entities that do not belong to the source domain. While previous works mainly focus on style transfer, we propose a novel and concise framework to address the \textit{generalized one-shot adaptation} task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core idea is to constrain the gap between the internal distributions of the reference and the syntheses by the sliced Wasserstein distance. To this end, style fixation is first applied to roughly capture the exemplary style, and an auxiliary network is introduced into the generator to disentangle entity and style transfer. In addition, to preserve cross-domain correspondence, we propose a variational Laplacian regularization that constrains the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios. Code is available at \url{https://github.com/zhangzc21/Generalized-One-shot-GAN-adaptation}.
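For intuition, a minimal PyTorch-style sketch of the sliced Wasserstein distance used to match internal feature distributions might look as follows; the function name and the assumption that both feature sets have the same size are ours for illustration, not taken from the released code.
\begin{verbatim}
import torch

def sliced_wasserstein_distance(x, y, num_projections=128):
    # x, y: (n, d) feature sets, e.g. internal features of the
    # reference image and of the syntheses; assumes x and y have
    # the same number of rows (an illustrative simplification).
    d = x.shape[1]
    # Sample random unit directions for the 1-D projections.
    theta = torch.randn(num_projections, d, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both sets onto every direction: (n, num_projections).
    x_proj, y_proj = x @ theta.t(), y @ theta.t()
    # The 1-D Wasserstein-2 distance is the L2 distance between
    # sorted projections; average over all random directions.
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return ((x_sorted - y_sorted) ** 2).mean()
\end{verbatim}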