Recent work on language-guided image manipulation has demonstrated the power of language in providing rich semantics, especially for face images. However, another natural source of information in language, motion, remains less explored. In this paper, we leverage this motion information and study a novel task, language-guided face animation, which aims to animate a static face image under the guidance of language. To better exploit both the semantics and the motions conveyed by language, we propose a simple yet effective framework. Specifically, we propose a recurrent motion generator that extracts a sequence of semantic and motion cues from the language and feeds them, together with visual information, into a pre-trained StyleGAN to generate high-quality frames. To optimize the proposed framework, we design three loss functions: a regularization loss to preserve face identity, a path length regularization loss to ensure motion smoothness, and a contrastive loss to enable video synthesis under various language guidance within a single model. Extensive qualitative and quantitative experiments on diverse domains (\textit{e.g.,} human faces, anime faces, and dog faces) demonstrate the superiority of our model in generating high-quality, realistic videos from a single still image under the guidance of language. Code will be available at https://github.com/TiankaiHang/language-guided-animation.git.
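To make the pipeline concrete, the sketch below illustrates the idea of a recurrent motion generator: starting from an initial latent code (e.g., obtained by inverting the input face into a GAN's latent space) and a text embedding, it emits one latent code per frame, which a pre-trained StyleGAN would then decode into images. All dimensions, parameter shapes, and the residual update rule here are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

LATENT_DIM = 512  # assumed StyleGAN W-space dimensionality
TEXT_DIM = 512    # assumed text-embedding dimensionality (e.g., from a text encoder)

rng = np.random.default_rng(0)

# Toy "learned" parameters, randomly initialized purely for illustration.
W_h = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.01
W_t = rng.standard_normal((LATENT_DIM, TEXT_DIM)) * 0.01

def recurrent_motion_generator(w0, text_emb, num_frames):
    """Produce a per-frame latent trajectory conditioned on the text embedding."""
    codes = [w0]
    w = w0
    for _ in range(num_frames - 1):
        # Residual update: each step adds a small, text-conditioned motion
        # offset to the previous latent. Bounding the offset (tanh) and
        # scaling it keeps consecutive codes close, which is the intuition
        # behind enforcing motion smoothness along the latent path.
        delta = np.tanh(W_h @ w + W_t @ text_emb)
        w = w + 0.1 * delta
        codes.append(w)
    return np.stack(codes)  # shape: (num_frames, LATENT_DIM)

w0 = rng.standard_normal(LATENT_DIM)        # latent code of the input face
text_emb = rng.standard_normal(TEXT_DIM)    # embedding of the guiding sentence
trajectory = recurrent_motion_generator(w0, text_emb, num_frames=16)
print(trajectory.shape)  # (16, 512): one latent code per output frame
```

In the full system, each code in `trajectory` would be passed through the frozen StyleGAN synthesis network to render a frame, and the three losses (identity regularization, path length regularization, contrastive) would supervise the generator's parameters.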