Generative adversarial networks (GANs) have achieved remarkable success in image-to-image translation. However, GANs usually contain a huge number of parameters, which leads to intolerable memory and computation costs and limits their deployment on edge devices. To address this issue, knowledge distillation has been proposed to transfer knowledge from a cumbersome teacher model to an efficient student model. However, most previous knowledge distillation methods are designed for image classification and yield limited performance on image-to-image translation. In this paper, we propose Region-aware Knowledge Distillation (ReKo) to compress image-to-image translation models. ReKo first adaptively locates the crucial regions in the images with an attention module. Then, patch-wise contrastive learning is adopted to maximize the mutual information between student and teacher features in these crucial regions. Experiments against eight comparison methods on nine datasets demonstrate the substantial effectiveness of ReKo on both paired and unpaired image-to-image translation. For instance, our 7.08X compressed and 6.80X accelerated CycleGAN student outperforms its teacher by 1.33 and 1.04 FID on Horse to Zebra and Zebra to Horse, respectively. Code will be released on GitHub.
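To make the core idea concrete, the sketch below shows one plausible form of the region-aware patch-wise contrastive objective described above: patch locations are sampled in proportion to an attention map, and the student is trained to match the teacher's feature at the same location via an InfoNCE loss. This is a minimal illustration, not the authors' implementation; the function name, tensor shapes, and hyperparameters (`k`, `tau`) are assumptions.

```python
import torch
import torch.nn.functional as F

def region_aware_patch_nce(f_s, f_t, attn, k=256, tau=0.07):
    """Hypothetical sketch of a region-aware patch-wise contrastive loss.

    f_s, f_t: student / teacher feature maps, shape (B, C, H, W)
    attn:     non-negative attention map over spatial locations, shape (B, H, W),
              assumed to highlight the crucial regions
    k:        number of patches sampled per image (assumed hyperparameter)
    tau:      temperature of the InfoNCE objective (assumed hyperparameter)
    """
    B, C, H, W = f_s.shape
    f_s = f_s.flatten(2).permute(0, 2, 1)            # (B, H*W, C)
    f_t = f_t.flatten(2).permute(0, 2, 1)            # (B, H*W, C)
    weights = attn.flatten(1)                        # (B, H*W)
    # Sample patch locations in proportion to the attention weights,
    # so distillation concentrates on the crucial regions.
    idx = torch.multinomial(weights, k)              # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, C)        # (B, k, C)
    q = F.normalize(f_s.gather(1, idx), dim=-1)      # student queries (B, k, C)
    pos = F.normalize(f_t.gather(1, idx), dim=-1)    # teacher keys    (B, k, C)
    # InfoNCE: each student patch should match the teacher patch at the
    # same location (positive) against the other sampled patches (negatives),
    # which lower-bounds the mutual information between the two.
    logits = torch.bmm(q, pos.transpose(1, 2)) / tau  # (B, k, k)
    labels = torch.arange(k, device=logits.device).expand(B, k)
    return F.cross_entropy(logits.reshape(B * k, k), labels.reshape(B * k))
```

In practice such a loss would be added to the student generator's ordinary GAN (and, for paired translation, reconstruction) objectives; the attention module producing `attn` is trained jointly, though its exact architecture is not specified here.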