We propose a simple yet powerful Landmark guided Generative Adversarial Network (LandmarkGAN) for facial expression-to-expression translation from a single image, which is an important and challenging task in computer vision since expression-to-expression translation is a non-linear and non-aligned problem. Moreover, it requires a high-level semantic understanding between the input and output images, since the objects in the images can have arbitrary poses, sizes, locations, backgrounds, and self-occlusions. To tackle this problem, we propose to explicitly exploit facial landmark information. Since the task is challenging, we split it into two sub-tasks: (i) category-guided landmark generation and (ii) landmark-guided expression-to-expression translation. The two sub-tasks are trained in an end-to-end fashion so that the generated landmarks and expressions mutually benefit from each other. Compared with existing keypoint-guided approaches, the proposed LandmarkGAN needs only a single facial image to generate various expressions. Extensive experimental results on four public datasets demonstrate that LandmarkGAN achieves better results than state-of-the-art approaches while using only a single image. The code is available at https://github.com/Ha0Tang/LandmarkGAN.
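To make the two-stage design concrete, the following is a minimal sketch, assuming a PyTorch setup in which a landmark generator conditions on the target expression category and an image generator conditions on the generated landmarks; all module names, layer widths, landmark counts, and shapes are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a two-stage pipeline where
# stage (i) produces landmark heatmaps from a face and a target expression
# category, and stage (ii) translates the face guided by those landmarks.
# Module names, channel sizes, and the 68-landmark / 7-category choices are
# hypothetical placeholders.
import torch
import torch.nn as nn

class LandmarkGenerator(nn.Module):
    """Stage (i): category-guided landmark generation (sketch)."""
    def __init__(self, num_categories=7, num_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_categories, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_landmarks, 7, padding=3),  # one heatmap per landmark
        )

    def forward(self, face, category_onehot):
        # Tile the expression category over the spatial dims and concatenate.
        b, _, h, w = face.shape
        cat_map = category_onehot.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([face, cat_map], dim=1))

class ImageGenerator(nn.Module):
    """Stage (ii): landmark-guided expression-to-expression translation (sketch)."""
    def __init__(self, num_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_landmarks, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, face, landmarks):
        return self.net(torch.cat([face, landmarks], dim=1))

# End-to-end forward pass: the generated landmarks feed the image generator,
# so gradients from the image-level losses also refine the landmark generator.
G_L, G_I = LandmarkGenerator(), ImageGenerator()
face = torch.randn(1, 3, 128, 128)               # single input face
target = torch.zeros(1, 7); target[0, 3] = 1.0   # target expression category (one-hot)
landmarks = G_L(face, target)
output = G_I(face, landmarks)
```

In this kind of setup, joint training of the two stages (rather than fixing pre-detected landmarks) is what allows the landmark and expression outputs to improve each other, which matches the end-to-end formulation described above.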