This work proposes a novel method to generate realistic talking head videos using audio and visual streams. We animate a source image by transferring head motion from a driving video via a dense motion field computed from learnable keypoints. We improve the quality of lip sync by providing audio as an additional input, helping the network attend to the mouth region. We incorporate additional priors from face segmentation and a face mesh to improve the structure of the reconstructed faces. Finally, we improve the visual quality of the generated videos by incorporating a carefully designed identity-aware generator module. The identity-aware generator takes the source image and the warped motion features as input to generate a high-quality output with fine-grained details. Our method produces state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach using multiple metrics and outperform current techniques both qualitatively and quantitatively. Our work opens up several applications, including enabling low-bandwidth video calls. We release a demo video and additional information at http://cvit.iiit.ac.in/research/projects/cvit-projects/avfr.
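To make the warping and fusion steps concrete, the following is a minimal, illustrative sketch of how a dense motion field can warp source features and how an identity-aware generator might fuse identity features from the source image with the warped motion features. All layer names, channel sizes, and the fusion scheme here are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IdentityAwareGenerator(nn.Module):
    """Sketch of an identity-aware generator: encodes the source image to
    preserve identity details, concatenates those features with the warped
    motion features, and decodes a reconstructed frame. Layer choices are
    assumed, not taken from the paper."""

    def __init__(self, feat_ch: int = 64):
        super().__init__()
        # Encoder over the source image, keeping fine-grained identity cues.
        self.identity_encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
        )
        # Decoder that maps the fused features back to an RGB frame.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch * 2, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, source_img: torch.Tensor, warped_feat: torch.Tensor) -> torch.Tensor:
        id_feat = self.identity_encoder(source_img)        # (B, C, H, W)
        fused = torch.cat([id_feat, warped_feat], dim=1)   # channel-wise fusion
        return self.decoder(fused)                         # reconstructed frame


# Toy usage with a 256x256 source frame.
src = torch.randn(1, 3, 256, 256)
src_feat = torch.randn(1, 64, 256, 256)

# A dense motion field here is a (B, H, W, 2) grid of sampling coordinates in
# [-1, 1]; warping the source features with it transfers the driving motion.
motion_field = torch.rand(1, 256, 256, 2) * 2 - 1
warped_feat = F.grid_sample(src_feat, motion_field, align_corners=True)

gen = IdentityAwareGenerator()
out = gen(src, warped_feat)   # (1, 3, 256, 256) reconstructed frame
```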