The goal of face reenactment is to transfer a target expression and head pose to a source face while preserving the source identity. With the popularity of face-related applications, there has been much research on this topic. However, the results of existing methods are still limited to low resolution and lack photorealism. In this work, we present a one-shot, high-resolution face reenactment method called MegaFR. Specifically, we leverage StyleGAN by conditioning it on 3DMM-based rendered images, and we overcome the lack of high-quality video datasets by designing a loss function that works without high-quality videos. In addition, we apply iterative refinement to handle extreme poses and/or expressions. Since the proposed method controls source images through 3DMM parameters, it allows explicit manipulation of the source face. We apply MegaFR to various applications such as face frontalization, eye in-painting, and talking head generation. Experimental results show that our method successfully disentangles identity from expression and head pose, and outperforms conventional methods.
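To make the parameter-level control described above concrete, the following is a minimal, hypothetical Python sketch of the overall flow: identity coefficients are taken from the source, expression and pose coefficients from the target, a 3DMM rendering serves as the conditioning image for a StyleGAN-based generator, and the parameters are iteratively refined for extreme poses or expressions. All names here (regressor, renderer, generator, the coefficient keys) are placeholders for illustration, not a released MegaFR API.

```python
# Hypothetical sketch (not the authors' code): 3DMM-parameter control with
# iterative refinement, assuming callables for a 3DMM regressor, a renderer,
# and a conditional StyleGAN-based generator.

def reenact(source_img, target_img, regressor, renderer, generator, n_iters=3):
    """One-shot reenactment: keep source identity, adopt target expression/pose."""
    src = regressor(source_img)   # dict of 3DMM coefficients: 'id', 'exp', 'pose'
    tgt = regressor(target_img)

    # Mix coefficients: identity from the source, expression/pose from the target.
    mixed = {"id": src["id"], "exp": tgt["exp"], "pose": tgt["pose"]}
    cond = renderer(mixed)        # 3DMM rendering used as the conditioning image
    out = generator(source_img, cond)

    # Iterative refinement for extreme poses/expressions: re-estimate the 3DMM
    # parameters of the current output and compensate the residual.
    for _ in range(n_iters):
        est = regressor(out)
        mixed["exp"] = mixed["exp"] + (tgt["exp"] - est["exp"])
        mixed["pose"] = mixed["pose"] + (tgt["pose"] - est["pose"])
        cond = renderer(mixed)
        out = generator(source_img, cond)
    return out
```

Because the control signal is a rendered 3DMM image rather than a latent code, applications such as face frontalization (zeroing the pose coefficients) or talking head generation (driving expression coefficients over time) reduce to editing the `mixed` dictionary before rendering.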