The advent of large-scale training has produced a cornucopia of powerful visual recognition models. However, generative models, such as GANs, have traditionally been trained from scratch in an unsupervised manner. Can the collective "knowledge" from a large bank of pretrained vision models be leveraged to improve GAN training? If so, with so many models to choose from, which one(s) should be selected, and in what manner are they most effective? We find that pretrained computer vision models can significantly improve performance when used in an ensemble of discriminators. Notably, the particular subset of selected models greatly affects performance. We propose an effective selection mechanism, by probing the linear separability between real and fake samples in pretrained model embeddings, choosing the most accurate model, and progressively adding it to the discriminator ensemble. Interestingly, our method can improve GAN training in both limited data and large-scale settings. Given only 10k training samples, our FID on LSUN Cat matches the StyleGAN2 trained on 1.6M images. On the full dataset, our method improves FID by 1.5x to 2x on cat, church, and horse categories of LSUN.
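The selection mechanism described above lends itself to a short illustration. The following is a minimal, hypothetical Python sketch of linear-probe-based model selection, not the authors' released code: the names `model_bank`, `probe_accuracy`, and `select_next_model` are assumptions for illustration, and each entry in `model_bank` is assumed to map a model name to a function that returns NumPy feature embeddings for a batch of images.

```python
# Hypothetical sketch: for each pretrained feature extractor, probe how
# linearly separable real vs. fake samples are in its embedding space,
# then pick the most accurate probe to add to the discriminator ensemble.
# All names here are illustrative assumptions, not the paper's actual API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(embed_fn, real_images, fake_images):
    """Fit a linear probe (real vs. fake) on one model's embeddings
    and return its held-out classification accuracy."""
    feats = np.concatenate([embed_fn(real_images), embed_fn(fake_images)])
    labels = np.concatenate([np.ones(len(real_images)),
                             np.zeros(len(fake_images))])
    X_tr, X_va, y_tr, y_va = train_test_split(
        feats, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_va, y_va)

def select_next_model(model_bank, real_images, fake_images, already_used):
    """Choose the pretrained model whose embeddings best separate real
    from fake samples, skipping models already in the ensemble; calling
    this repeatedly grows the ensemble progressively."""
    candidates = {name: fn for name, fn in model_bank.items()
                  if name not in already_used}
    return max(candidates,
               key=lambda n: probe_accuracy(candidates[n],
                                            real_images, fake_images))
```

The intuition behind the probe: if a simple linear classifier on a frozen model's embeddings can already tell real from generated samples, that model encodes information the current discriminator ensemble is missing, so it is the most useful one to add next.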