Data-Efficient GANs (DE-GANs), which aim to learn generative models from a limited amount of training data, face several challenges in generating high-quality samples. Since data augmentation strategies have largely alleviated training instability, how to further improve the generative performance of DE-GANs has become a research hotspot. Recently, contrastive learning has shown great potential for improving the synthesis quality of DE-GANs, yet the underlying principles are not well explored. In this paper, we revisit and compare different contrastive learning strategies in DE-GANs and identify that (i) the current bottleneck of generative performance is the discontinuity of the latent space; and (ii) compared to other contrastive learning strategies, Instance-perturbation works towards latent space continuity, which brings the major improvement to DE-GANs. Based on these observations, we propose FakeCLR, which applies contrastive learning only to perturbed fake samples, and devise three related training techniques: Noise-related Latent Augmentation, Diversity-aware Queue, and Forgetting Factor of Queue. Our experimental results establish a new state of the art on both few-shot generation and limited-data generation. On multiple datasets, FakeCLR achieves more than 15% FID improvement over existing DE-GANs. Code is available at https://github.com/iceli1007/FakeCLR.
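The sketch below illustrates, in a hedged and simplified form, what instance-perturbation contrastive learning on fake samples can look like: two views of the same instance come from small perturbations of the same latent code, and an InfoNCE loss pulls them together while pushing them away from a queue of negatives. All names (the toy generator, projection head, `perturb`, the placeholder queue) are illustrative assumptions, not the authors' implementation; FakeCLR's Diversity-aware Queue and Forgetting Factor are only hinted at in the comments.

```python
# Minimal sketch (not the official FakeCLR code) of instance-perturbation
# contrastive learning on fake samples. All module names and sizes are
# hypothetical stand-ins chosen to keep the example self-contained.

import torch
import torch.nn.functional as F

latent_dim, batch, feat_dim, tau = 128, 16, 64, 0.07

# Stand-in generator and projection head (toy architectures, not the paper's).
G = torch.nn.Sequential(torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
                        torch.nn.Linear(256, feat_dim))
proj = torch.nn.Sequential(torch.nn.Linear(feat_dim, feat_dim), torch.nn.ReLU(),
                           torch.nn.Linear(feat_dim, feat_dim))

def perturb(z, sigma=0.1):
    # Noise-related latent augmentation: jitter the latent code slightly.
    return z + sigma * torch.randn_like(z)

z = torch.randn(batch, latent_dim)
# Two views of the same instance are generated from perturbations of the same z.
q = F.normalize(proj(G(perturb(z))), dim=1)   # query view
k = F.normalize(proj(G(perturb(z))), dim=1)   # positive key view

# Negatives drawn from a queue of past fake-sample embeddings (random
# placeholders here; FakeCLR maintains this queue with diversity awareness
# and a forgetting factor).
queue = F.normalize(torch.randn(4096, feat_dim), dim=1)

# InfoNCE: each perturbed instance is pulled toward its other view and pushed
# away from queued negatives, encouraging latent-space continuity.
pos = (q * k).sum(dim=1, keepdim=True) / tau   # (batch, 1)
neg = q @ queue.t() / tau                      # (batch, 4096)
logits = torch.cat([pos, neg], dim=1)
labels = torch.zeros(batch, dtype=torch.long)  # positive is index 0
loss = F.cross_entropy(logits, labels)
print(loss.item())
```

In a full training loop this loss would be added to the usual adversarial objectives and the queue updated with fresh fake-sample embeddings each step; the exact weighting and queue update rule follow the paper rather than this sketch.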