Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited-data regimes, training typically diverges, and the generated samples are therefore of low quality and lack diversity. Previous works have addressed training in low-data settings by leveraging transfer learning and data augmentation techniques. We propose a novel transfer learning method for GANs in the limited-data domain that leverages an informative data prior derived from self-supervised or supervised networks pre-trained on a diverse source domain. We perform experiments on several standard vision datasets using various GAN architectures (BigGAN, SNGAN, StyleGAN2) to demonstrate that the proposed method effectively transfers knowledge to domains with few target images, outperforming existing state-of-the-art techniques in terms of image quality and diversity. We also show the utility of the data instance prior in large-scale unconditional image generation.
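The core idea of the abstract, regularizing GAN training on a small target domain with an instance-level prior computed by a frozen, pre-trained feature extractor, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the choice of an ImageNet-pretrained ResNet-50, the `prior_loss` function name, and the simple feature-matching penalty are not the paper's actual formulation.

```python
# Minimal sketch: adding a feature-level "data prior" term to the generator
# loss using a frozen encoder pre-trained on a diverse source domain.
# All names and the exact loss form are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Frozen encoder pre-trained on ImageNet (a stand-in for the diverse source
# domain); the classification head is dropped to expose penultimate features.
encoder = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
encoder.fc = torch.nn.Identity()
encoder.eval().requires_grad_(False)

def prior_loss(fake_images: torch.Tensor, real_images: torch.Tensor) -> torch.Tensor:
    """Penalize the distance between generated- and real-image features.

    Inputs are assumed to be (N, 3, H, W) tensors already resized and
    normalized to the encoder's expected input statistics.
    """
    f_fake = encoder(fake_images)
    f_real = encoder(real_images)
    return F.mse_loss(f_fake, f_real.detach())

# Inside the generator update, with adv_loss from any standard GAN objective
# (hypothetical symbols: G is the generator, z the latent batch, x_real the
# few target-domain images, lambda_prior a weighting hyperparameter):
#   g_loss = adv_loss + lambda_prior * prior_loss(G(z), x_real)
```

Because the encoder stays frozen, the prior term only shapes the generator's gradients; how the prior is defined and weighted against the adversarial loss would follow the paper's actual method rather than this sketch.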