As an important and challenging problem, few-shot image generation aims to generate realistic images by training a GAN model on only a few samples. A typical solution is to transfer a well-trained GAN model from a data-rich source domain to a data-deficient target domain. In this paper, we propose D3T-GAN, a novel self-supervised transfer scheme that addresses cross-domain GAN transfer for few-shot image generation. Specifically, we design two separate strategies to transfer knowledge between generators and between discriminators. To transfer knowledge between generators, we apply a data-dependent transformation that projects target samples into the source generator's space and reconstructs them; knowledge is then transferred from the transformed samples to the generated samples. To transfer knowledge between discriminators, we design a multi-level distillation of discriminant knowledge from the source discriminator to the target discriminator on both real and fake samples. Extensive experiments show that our method improves the quality of generated images and achieves state-of-the-art FID scores on commonly used datasets.
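To make the multi-level discriminator distillation concrete, the following is a minimal sketch of one plausible form of such a loss: intermediate features are extracted from several levels of the source and target discriminators and matched with a mean-squared-error term. The function name, the plain-array features, and the uniform averaging over levels are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multi_level_distill_loss(src_feats, tgt_feats):
    """MSE between source- and target-discriminator features,
    averaged over the chosen levels (illustrative sketch; the
    paper's exact loss form and weighting are not specified here)."""
    assert len(src_feats) == len(tgt_feats)
    per_level = [np.mean((s - t) ** 2) for s, t in zip(src_feats, tgt_feats)]
    return float(np.mean(per_level))

# Toy features from three hypothetical discriminator levels.
rng = np.random.default_rng(0)
src = [rng.normal(size=(4, 8)) for _ in range(3)]
tgt = [f + 0.1 for f in src]  # target features offset by a constant 0.1
loss = multi_level_distill_loss(src, tgt)  # -> 0.01 for this toy offset
```

In practice this term would be computed on both real and fake samples and added to the adversarial loss with a weighting coefficient.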