Facial expression manipulation aims to edit facial expressions according to a given condition. Previous methods edit an input image under the guidance of a discrete emotion label or an absolute condition (e.g., facial action units) so that it exhibits the desired expression. However, these methods either alter condition-irrelevant regions or are ill-suited to fine-grained editing. In this study, we address both issues and propose a novel method. First, we replace the continuous absolute condition with a relative condition, specifically relative action units (AUs). With relative AUs, the generator learns to transform only the regions of interest, which are specified by the non-zero-valued relative AUs. Second, our generator is built on U-Net and strengthened by a Multi-Scale Feature Fusion (MSF) mechanism for high-quality expression editing. Extensive quantitative and qualitative experiments demonstrate the improvements of our approach over state-of-the-art expression editing methods. Code is available at \url{https://github.com/junleen/Expression-manipulator}.
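The relative condition described above can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's implementation: it only shows that the relative condition is the difference between target and source AU intensity vectors, so zero entries leave a region untouched while non-zero entries mark regions the generator should transform. The AU intensity values and the helper name `relative_aus` are assumptions for illustration.

```python
import numpy as np

def relative_aus(source_aus, target_aus):
    """Relative condition = target minus source AU intensities.

    Zero entries mean "leave this region unchanged"; only non-zero
    entries indicate facial regions the generator should transform.
    """
    return np.asarray(target_aus, dtype=float) - np.asarray(source_aus, dtype=float)

# Hypothetical AU intensity vectors (e.g., three AUs, values in [0, 1]).
source = np.array([0.0, 0.8, 0.1])
target = np.array([0.0, 0.2, 0.1])

delta = relative_aus(source, target)        # -> [0.0, -0.6, 0.0]
edited_regions = np.nonzero(delta)[0]       # only the second AU changes
```

In this sketch, only one AU differs between source and target, so the relative condition is zero everywhere else, which is exactly what lets the generator leave condition-irrelevant regions intact.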