Recently, there has been increasing interest in image editing methods that employ pre-trained unconditional image generators (e.g., StyleGAN). However, applying these methods to translate images across multiple visual domains remains challenging. Existing works often fail to preserve the domain-invariant part of the image (e.g., the identity in human face translations), do not handle multiple domains, or do not allow multi-modal translations. This work proposes an implicit style function (ISF) that straightforwardly achieves multi-modal and multi-domain image-to-image translation from pre-trained unconditional generators. The ISF manipulates the semantics of an input latent code so that the image generated from it lies in the desired visual domain. Our experiments on human face and animal manipulation show significant improvements over the baselines. Our model enables cost-effective, multi-modal, unsupervised image-to-image translation at high resolution using pre-trained unconditional GANs. The code and data are available at: \url{https://github.com/yhlleo/stylegan-mmuit}.
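To make the ISF idea concrete, the sketch below shows one plausible shape such a function could take: a small MLP that maps an input latent code, a random style code (for multi-modality), and a target-domain label to an edited latent code of the same dimensionality, which a frozen pre-trained generator would then decode. All names, dimensions, and the residual two-layer architecture are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical sketch of an implicit style function (ISF).
# Assumption: the ISF is a small MLP taking (w, z, domain) and returning an
# edited latent code w' with w'.shape == w.shape; a frozen pre-trained
# generator (not shown) would decode w' into an image of the target domain.

rng = np.random.default_rng(0)

W_DIM, Z_DIM, N_DOMAINS, HIDDEN = 512, 16, 3, 256

# Randomly initialized stand-ins for trained MLP weights.
W1 = rng.standard_normal((W_DIM + Z_DIM + N_DOMAINS, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, W_DIM)) * 0.01
b2 = np.zeros(W_DIM)

def isf(w, z, domain):
    """Map (latent code, style code, target domain) to an edited latent code."""
    d = np.eye(N_DOMAINS)[domain]        # one-hot target-domain label
    x = np.concatenate([w, z, d])
    h = np.maximum(0.0, x @ W1 + b1)     # ReLU hidden layer
    return w + h @ W2 + b2               # residual edit keeps w' close to w

w = rng.standard_normal(W_DIM)           # latent code of the input image
z = rng.standard_normal(Z_DIM)           # random style code -> multi-modal outputs
w_edited = isf(w, z, domain=1)
print(w_edited.shape)                    # (512,)
```

Sampling different `z` for a fixed `w` would yield different edited codes, which is one way multi-modal translation could emerge; the residual form is a common choice for keeping the edit close to the original code (helping preserve domain-invariant content such as identity).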