In this work, we propose a novel two-stage framework, called FaceShifter, for high-fidelity and occlusion-aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, our framework, in its first stage, generates the swapped face in high fidelity by thoroughly and adaptively exploiting and integrating the target attributes. We propose a novel attribute encoder for extracting multi-level target face attributes, and a new generator with carefully designed Adaptive Attentional Denormalization (AAD) layers that adaptively integrate the identity and the attributes for face synthesis. To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net). It is trained to recover anomaly regions in a self-supervised way, without any manual annotations. Extensive experiments on wild faces demonstrate that our face swapping results are not only considerably more perceptually appealing, but also preserve identity better than other state-of-the-art methods.
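The core of the AAD layer described above is a per-location blend between an attribute-conditioned and an identity-conditioned modulation of a normalized feature map. The following is a minimal NumPy sketch of that blending step only; in the actual network the modulation parameters (`gamma_att`, `beta_att`, `gamma_id`, `beta_id`) and the mask logits are produced by learned convolutional and fully connected layers, which are omitted here. All names and the instance-normalization choice are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aad_blend(h, gamma_att, beta_att, gamma_id, beta_id, mask_logits):
    """Sketch of one Adaptive Attentional Denormalization (AAD) step.

    h           : feature map of shape (C, H, W), instance-normalized below.
    gamma_att,
    beta_att    : per-pixel (C, H, W) modulation from the target attributes.
    gamma_id,
    beta_id     : per-channel (C,) modulation from the identity embedding.
    mask_logits : (C, H, W) logits for the attention mask M deciding, per
                  location, how much identity vs. attribute signal to keep.
    """
    # Instance-normalize h per channel (assumed normalization scheme).
    mu = h.mean(axis=(1, 2), keepdims=True)
    sigma = h.std(axis=(1, 2), keepdims=True) + 1e-5
    h_norm = (h - mu) / sigma

    # Attribute-conditioned activation A (spatially varying).
    a = gamma_att * h_norm + beta_att
    # Identity-conditioned activation I (channel-wise, broadcast spatially).
    i = gamma_id[:, None, None] * h_norm + beta_id[:, None, None]

    # Attention mask M in (0, 1): M -> 1 keeps identity, M -> 0 keeps attributes.
    m = sigmoid(mask_logits)
    return (1.0 - m) * a + m * i
```

Driving `mask_logits` strongly positive recovers the pure identity branch, and strongly negative recovers the pure attribute branch, which is how the layer can, for example, let identity dominate the inner face while attributes dominate hair and background.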