We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024². We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.