Image-to-image (I2I) translation has matured in recent years and can generate high-quality, realistic images. However, despite this success, it still faces important challenges when applied to small domains. Existing methods apply transfer learning to I2I translation, but they still require learning millions of parameters from scratch. This drawback severely limits their application to small domains. In this paper, we propose a new transfer learning method for I2I translation (TransferI2I). We decouple the learning process into an image generation step and an I2I translation step. In the first step we propose two novel techniques: source-target initialization and self-initialization of the adaptor layer. The former finetunes a pretrained generative model (e.g., StyleGAN) on source and target data. The latter allows us to initialize all non-pretrained network parameters without the need for any data. These techniques provide a better initialization for the I2I translation step. In addition, we introduce an auxiliary GAN that further facilitates the training of deep I2I systems even from small datasets. In extensive experiments on three datasets (Animal Faces, Birds, and Foods), we show that we outperform existing methods and improve mFID by more than 25 points on several datasets.
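The two initialization techniques can be illustrated with a minimal toy sketch. This is not the paper's implementation: the function names, the single-gradient-step "finetuning", and the identity-map strategy for the data-free adaptor initialization are all illustrative assumptions, meant only to show the shape of the two steps (copy-and-finetune pretrained weights on source and target data; initialize the non-pretrained adaptor without touching any data).

```python
import numpy as np

def source_target_init(pretrained_w, source_grad, target_grad, lr=0.1):
    """Sketch of source-target initialization: start two copies of the
    pretrained generator weights and finetune one on source data and one
    on target data (here reduced to a single toy gradient step each)."""
    w_src = pretrained_w - lr * source_grad  # finetuned on source data
    w_tgt = pretrained_w - lr * target_grad  # finetuned on target data
    return w_src, w_tgt

def self_init_adaptor(dim):
    """Sketch of a data-free self-initialization for a non-pretrained
    adaptor layer: start it as the identity map, so pretrained features
    initially pass through unchanged (an assumed strategy, for
    illustration only)."""
    return np.eye(dim)

# Toy usage: both steps run before any I2I translation training.
w = np.ones((4, 4))
w_src, w_tgt = source_target_init(w, np.full((4, 4), 0.5), np.full((4, 4), -0.5))
adaptor = self_init_adaptor(4)
features = np.arange(4.0)
assert np.allclose(adaptor @ features, features)  # adaptor preserves features at init
```

The resulting source/target generators and the identity-initialized adaptor then serve as the starting point for the second (I2I translation) step, rather than training those parameters from scratch.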