Training effective Generative Adversarial Networks (GANs) requires large amounts of training data; without it, the trained models usually suffer from discriminator over-fitting and sub-optimal generation. Several prior studies address this issue by expanding the distribution of the limited training data via massive, hand-crafted data augmentation. We handle data-limited image generation from a very different perspective. Specifically, we design GenCo, a Generative Co-training network that mitigates discriminator over-fitting by introducing multiple complementary discriminators that provide diverse supervision from multiple distinct views during training. We instantiate the idea of GenCo in two ways. The first is Weight-Discrepancy Co-training (WeCo), which co-trains multiple distinct discriminators by diversifying their parameters. The second is Data-Discrepancy Co-training (DaCo), which achieves co-training by feeding the discriminators different views of the input images (e.g., different frequency components of the input images). Extensive experiments over multiple benchmarks show that GenCo achieves superior generation with limited training data. In addition, GenCo also complements data augmentation, yielding consistent and clear performance gains when the two are combined.
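To make the DaCo idea concrete, the sketch below splits an image into low- and high-frequency views via FFT masking; each view would then be fed to a separate discriminator. This is a minimal illustration, not the paper's implementation: the helper name `frequency_views` and the `cutoff` parameter are assumptions introduced here.

```python
import numpy as np

def frequency_views(image, cutoff=0.25):
    """Split an image into low- and high-frequency views via FFT masking.

    Hypothetical helper illustrating the DaCo idea: each returned view
    would supervise a distinct discriminator. `cutoff` (fraction of the
    spectrum treated as "low frequency") is an assumed parameter.
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Circular low-pass mask centred on the zero-frequency component.
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    mask = radius <= cutoff * min(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * ~mask)))
    return low, high

image = np.random.default_rng(0).standard_normal((32, 32))
low, high = frequency_views(image)
# The mask and its complement partition the spectrum, so the two views
# sum back to the original image: no information is lost across views.
print(np.allclose(low + high, image))  # True
```

Because the two masks are complementary, the views jointly preserve all image information while presenting each discriminator with a distinct input distribution.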