Generative Adversarial Networks (GANs) with high computation costs, e.g., BigGAN and StyleGAN2, have achieved remarkable results in synthesizing high-resolution, diverse, and high-fidelity images from random noise. Reducing the computation cost of GANs while still generating photo-realistic images is an urgent and challenging task for their broad deployment on devices with limited computational resources. In this work, we propose a novel yet simple {\bf D}iscriminator {\bf G}uided {\bf L}earning approach for compressing vanilla {\bf GAN}s, dubbed {\bf DGL-GAN}. Motivated by the observation that the teacher discriminator may contain meaningful information, we transfer knowledge solely from the teacher discriminator via the adversarial objective. We show that DGL-GAN is effective: empirically, learning from the teacher discriminator improves the performance of student GANs, as verified by extensive experiments. Furthermore, we propose a two-stage training strategy for DGL-GAN, which largely stabilizes its training process and achieves superior performance when DGL-GAN is applied to compress the two most representative large-scale vanilla GANs, i.e., StyleGAN2 and BigGAN. Experiments show that DGL-GAN achieves state-of-the-art (SOTA) results on both StyleGAN2 (FID 2.92 on FFHQ with nearly $1/3$ the parameters of StyleGAN2) and BigGAN (IS 93.29 and FID 9.92 on ImageNet with nearly $1/4$ the parameters of BigGAN), and also outperforms several existing vanilla GAN compression techniques. Moreover, DGL-GAN is also effective in boosting the performance of the original uncompressed GANs: the original uncompressed StyleGAN2 boosted with DGL-GAN achieves FID 2.65 on FFHQ, a new state-of-the-art. Code and models are available at \url{https://github.com/yuesongtian/DGL-GAN}.
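As a rough illustration of the idea of guiding the student generator with the frozen teacher discriminator, the sketch below combines two non-saturating adversarial terms, one from the student's own discriminator and one from the teacher's. This is a minimal NumPy sketch under stated assumptions: the non-saturating loss form and the weighting coefficient `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def dgl_generator_loss(d_student_fake, d_teacher_fake, lam=1.0):
    """Sketch of a discriminator-guided generator loss.

    d_student_fake / d_teacher_fake: raw logits that the student's own
    discriminator and the frozen teacher discriminator assign to the
    student generator's fake images. Only the teacher's *discriminator*
    contributes a guidance term; its weights receive no gradient updates.
    `lam` is a hypothetical weight on the teacher term (an assumption).
    """
    # Non-saturating generator loss: -log sigmoid(D(fake)) = softplus(-D(fake)).
    student_term = softplus(-d_student_fake).mean()
    teacher_term = softplus(-d_teacher_fake).mean()
    return student_term + lam * teacher_term

# At a logit of 0 each term is log(2), so the combined loss is 2*log(2).
loss = dgl_generator_loss(np.array([0.0]), np.array([0.0]))
```

In a real training loop the teacher discriminator would be loaded from the pretrained uncompressed GAN and kept frozen, while the student generator and student discriminator are updated adversarially as usual.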