In this paper, we propose a novel framework to translate a portrait photo-face into an anime appearance. Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face. However, unlike typical translation tasks, such anime-face translation is challenging due to complex variations of appearances among anime-faces. Existing methods often fail to transfer the styles of reference anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated faces. We propose AniGAN, a novel GAN-based translator that synthesizes high-quality anime-faces. Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face, while preserving the global structure of the source photo-face. We propose a double-branch discriminator to learn both domain-specific distributions and domain-shared distributions, helping generate visually pleasing anime-faces and effectively mitigate artifacts. Extensive experiments qualitatively and quantitatively demonstrate the superiority of our method over state-of-the-art methods.
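The double-branch idea can be illustrated schematically: a shared feature backbone feeds two scoring branches, one with per-domain heads (capturing domain-specific distributions) and one with a single head applied to both domains (capturing the domain-shared distribution). The toy NumPy sketch below is our own illustration of this structure, not the paper's actual architecture; all layer sizes, names, and the forward computation are assumptions made purely for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy sketch (NOT AniGAN's actual discriminator): a shared backbone with
# two branches. The "specific" branch uses a separate head per domain
# (photo vs. anime), while the "shared" branch scores both domains with
# the same head. All dimensions here are arbitrary assumptions.
class DoubleBranchDiscriminator:
    def __init__(self, in_dim=64, hid=32):
        self.W_shared = rng.standard_normal((in_dim, hid)) * 0.1
        # Domain-specific heads: one per domain.
        self.W_photo = rng.standard_normal((hid, 1)) * 0.1
        self.W_anime = rng.standard_normal((hid, 1)) * 0.1
        # Domain-shared head: common to both domains.
        self.W_common = rng.standard_normal((hid, 1)) * 0.1

    def forward(self, x, domain):
        h = relu(x @ self.W_shared)                 # shared features
        head = self.W_photo if domain == "photo" else self.W_anime
        specific_score = h @ head                   # domain-specific branch
        shared_score = h @ self.W_common            # domain-shared branch
        return specific_score, shared_score

D = DoubleBranchDiscriminator()
x = rng.standard_normal((4, 64))                    # batch of 4 feature vectors
s_spec, s_shared = D.forward(x, "anime")
print(s_spec.shape, s_shared.shape)                 # (4, 1) (4, 1)
```

In training, the domain-specific branch would push generated anime-faces toward the anime domain's distribution, while the domain-shared branch would regularize features common to photos and anime, which is one way such a split can mitigate artifacts.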