The adaptation of a Generative Adversarial Network (GAN) aims to transfer a pre-trained GAN to a given domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and has rarely been explored in previous works. We consider that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style, such as texture and color, and the emergence of new entities that do not belong to the source domain. While previous works mainly focus on style transfer, we propose a novel and concise framework\footnote{\url{https://github.com/thevoidname/Generalized-One-shot-GAN-Adaption}} that addresses the \textit{generalized one-shot adaptation} task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core objective is to constrain the gap between the internal distributions of the reference and the syntheses via the sliced Wasserstein distance. To better achieve this, style fixation is first applied to roughly capture the exemplary style, and an auxiliary network is attached to the original generator to disentangle entity and style transfer. In addition, to preserve cross-domain correspondence, we propose a variational Laplacian regularization that constrains the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios.
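To make the core objective concrete, below is a minimal PyTorch sketch of the sliced Wasserstein distance between two sets of internal feature vectors (e.g., patch features drawn from the reference and from the syntheses). The function name, the feature shapes, and the \texttt{n\_projections} default are illustrative assumptions, not the paper's actual implementation.

\begin{verbatim}
import torch

def sliced_wasserstein_distance(feats_ref, feats_syn, n_projections=128):
    """Approximate the sliced Wasserstein distance between two feature
    sets of shape (N, D), by projecting onto random 1-D directions and
    comparing the sorted projections. (Hypothetical sketch.)"""
    d = feats_ref.shape[1]
    # Draw random unit directions for the 1-D projections.
    proj = torch.randn(d, n_projections, device=feats_ref.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    # Project both feature sets onto each direction: (N, n_projections).
    ref_proj = feats_ref @ proj
    syn_proj = feats_syn @ proj
    # In 1-D, the Wasserstein distance reduces to the gap between sorted
    # samples; we assume equal sample counts so the tensors align.
    ref_sorted, _ = torch.sort(ref_proj, dim=0)
    syn_sorted, _ = torch.sort(syn_proj, dim=0)
    return (ref_sorted - syn_sorted).abs().mean()
\end{verbatim}

Averaging over many random directions makes this a cheap, differentiable proxy for matching the full high-dimensional internal distributions, which is why it is suitable as a training loss.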