Though GAN (Generative Adversarial Network) based techniques have greatly advanced the performance of image synthesis and face translation, only a few works in the literature provide region-based style encoding and translation. In this paper, we propose a region-wise normalization framework for region-level face translation. While per-region styles are encoded using an existing approach, we build a so-called RIN (region-wise normalization) block to individually inject the styles into per-region feature maps and then fuse them for the following convolution and upsampling. Both the shape and texture of different regions can thus be translated to various target styles. A region matching loss has also been proposed to significantly reduce the interference between regions during the translation process. Extensive experiments on three publicly available datasets, i.e., Morph, RaFD, and CelebAMask-HQ, suggest that our approach demonstrates a large improvement over state-of-the-art methods such as StarGAN, SEAN, and FUNIT. Our approach has the further advantage of precise control over the regions to be translated; as a result, region-level expression changes and step-by-step make-up can be achieved. The video demo is available at https://youtu.be/ceRqsbzXAfk.
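To make the idea of region-wise style injection concrete, the following is a minimal sketch, not the authors' implementation: it assumes SPADE/SEAN-style modulation in which each region's style code produces per-channel scale and shift parameters that are applied only inside that region's mask, and the modulated per-region features are summed (fused) before the next convolution. All class names, shapes, and the choice of instance normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegionWiseNorm(nn.Module):
    """Hypothetical RIN-like block: per-region style injection + fusion."""
    def __init__(self, num_regions: int, feat_channels: int, style_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        # One small projection per region maps its style code to (gamma, beta).
        self.to_gamma_beta = nn.ModuleList(
            [nn.Linear(style_dim, 2 * feat_channels) for _ in range(num_regions)]
        )

    def forward(self, feat, region_masks, region_styles):
        # feat:          (B, C, H, W) feature map from the decoder
        # region_masks:  (B, R, H, W) one (soft or hard) mask per region
        # region_styles: (B, R, style_dim) per-region style codes
        normalized = self.norm(feat)
        fused = torch.zeros_like(feat)
        for r, proj in enumerate(self.to_gamma_beta):
            gamma, beta = proj(region_styles[:, r]).chunk(2, dim=1)
            gamma = gamma.unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
            beta = beta.unsqueeze(-1).unsqueeze(-1)
            mask = region_masks[:, r:r + 1]             # (B, 1, H, W)
            # Inject this region's style only inside its mask, then fuse.
            fused = fused + mask * (normalized * (1 + gamma) + beta)
        return fused
```

Under these assumptions, swapping the style code of a single region (e.g., the mouth or hair) changes only the features under that region's mask, which is what enables region-level expression edits and step-by-step make-up.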