Generative Adversarial Networks (GANs) have achieved remarkable success in generating high-quality images; however, they are difficult to deploy on resource-constrained devices due to heavy computational costs and large memory usage. Although recent efforts on compressing GANs have produced notable results, the compressed models still contain redundancies and can be compressed further. To address this issue, we propose a novel online multi-granularity distillation (OMGD) scheme to obtain lightweight GANs that generate high-fidelity images with low computational demands. We make the first attempt to apply single-stage online distillation to GAN-oriented compression, where a progressively promoted teacher generator helps to refine a discriminator-free student generator. Complementary teacher generators and network layers provide comprehensive, multi-granularity supervision that enhances visual fidelity from diverse dimensions. Experimental results on four benchmark datasets demonstrate that OMGD compresses Pix2Pix and CycleGAN by 40× in MACs and 82.5× in parameters without loss of image quality. These results show that OMGD offers a feasible solution for deploying real-time image translation on resource-constrained devices. Our code and models are made public at: https://github.com/bytedance/OMGD.
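The core idea of single-stage online distillation, where the teacher is promoted and the student refined in the same training loop rather than in two separate stages, can be illustrated with a toy sketch. This is a hypothetical minimal example, assuming linear "generators" and an L2 distillation loss; the actual OMGD method uses full GAN generators and multi-granularity losses, which are not reproduced here.

```python
# Minimal sketch of single-stage online distillation (assumed toy setup):
# the teacher is trained on the task loss, while the student is trained
# only to mimic the current teacher -- no discriminator, no labels.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))           # toy inputs standing in for images
Y = X @ rng.normal(size=(8, 8))        # ground-truth "translations"

W_t = rng.normal(size=(8, 8)) * 0.1    # teacher generator (large in practice)
W_s = rng.normal(size=(8, 8)) * 0.1    # student generator (compressed)

init_err = np.mean((X @ W_s - Y) ** 2)  # student error before training
lr = 0.01
for step in range(200):
    # 1) Promote the teacher with the task loss
    #    (stands in for the GAN objective in the real method).
    grad_t = X.T @ (X @ W_t - Y) / len(X)
    W_t -= lr * grad_t
    # 2) Refine the student against the *current* teacher output only;
    #    the student never sees the discriminator or the labels Y.
    grad_s = X.T @ (X @ W_s - X @ W_t) / len(X)
    W_s -= lr * grad_s

teacher_err = np.mean((X @ W_t - Y) ** 2)
student_err = np.mean((X @ W_s - Y) ** 2)
```

Because both updates happen in every iteration, the student tracks a steadily improving teacher, which is the "progressively promoted teacher" effect the abstract describes, rather than distilling from a frozen, fully trained teacher in a second stage.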