Unsupervised style transfer that supports diverse input styles using only one trained generator is a challenging and interesting task in computer vision. This paper proposes a Multi-IlluStrator Style Generative Adversarial Network (MISS GAN), a multi-style framework for unsupervised image-to-illustration translation that generates styled yet content-preserving images. The illustration dataset is challenging: it comprises illustrations by seven different illustrators and therefore contains diverse styles. Existing methods either require training as many generators as there are illustrators to handle the different styles, which limits their practical usage, or require training an image-specific network, which ignores the style information available in the illustrator's other images. MISS GAN is both input-image specific and exploits the information in the illustrator's other images, using only one trained model.
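The central idea stated above, a single generator covering all illustrators' styles rather than one generator per illustrator, can be illustrated with a short sketch. The following PyTorch code is a minimal illustrative example and not the authors' architecture: the content/style encoder layout, the AdaIN-like feature modulation, and all class names (ContentEncoder, StyleEncoder, SingleMultiStyleGenerator) are assumptions made here for exposition only.

```python
# Minimal sketch (not the authors' code) of a single style-conditioned generator
# that replaces a per-illustrator set of generators. All architectural details
# below are illustrative assumptions.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Maps an input photo to a spatial content representation."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class StyleEncoder(nn.Module):
    """Maps a reference illustration to a compact style code."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(ch * 2, style_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))


class Decoder(nn.Module):
    """Decodes content features into an illustration, modulated by the style code."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        # The style code predicts per-channel scale and shift (AdaIN-like modulation).
        self.affine = nn.Linear(style_dim, ch * 4 * 2)
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(ch * 4, ch * 2, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(ch * 2, ch, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 7, 1, 3), nn.Tanh(),
        )

    def forward(self, content, style_code):
        gamma, beta = self.affine(style_code).chunk(2, dim=1)
        gamma = gamma[..., None, None]
        beta = beta[..., None, None]
        h = content * (1 + gamma) + beta  # inject style into content features
        return self.up(h)


class SingleMultiStyleGenerator(nn.Module):
    """One generator that covers all illustrator styles via a style code."""
    def __init__(self):
        super().__init__()
        self.content_enc = ContentEncoder()
        self.style_enc = StyleEncoder()
        self.dec = Decoder()

    def forward(self, photo, reference_illustration):
        content = self.content_enc(photo)          # content of the specific input image
        style = self.style_enc(reference_illustration)  # style taken from another image
        return self.dec(content, style)


if __name__ == "__main__":
    gen = SingleMultiStyleGenerator()
    photo = torch.randn(1, 3, 128, 128)       # input image whose content should be preserved
    reference = torch.randn(1, 3, 128, 128)   # illustration providing the target style
    print(gen(photo, reference).shape)        # torch.Size([1, 3, 128, 128])
```

Because the style code is extracted from a reference illustration at inference time, the same trained weights can serve any of the seven illustrators, which is the practical advantage over training one generator per illustrator.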