Audio-driven talking head animation is a challenging research topic with many real-world applications. Recent works have focused on creating photo-realistic 2D animation, while learning different talking or singing styles remains an open problem. In this paper, we present a new method to generate talking head animation with learnable style references. Given a set of style reference frames, our framework can reconstruct 2D talking head animation based on a single input image and an audio stream. Our method first produces facial landmark motion from the audio stream and constructs intermediate style patterns from the style reference images. We then feed both outputs into a style-aware image generator to produce photo-realistic, high-fidelity 2D animation. In practice, our framework can extract the style information of a specific character and transfer it to any new static image for talking head animation. Extensive experimental results show that our method outperforms recent state-of-the-art approaches both qualitatively and quantitatively.
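As a rough illustration of the three-stage pipeline the abstract describes (audio-to-landmark prediction, style pattern extraction from reference frames, and a style-aware image generator), the following is a minimal PyTorch sketch. All module names, layer choices, and dimensions here (`AudioToLandmarks`, `StyleEncoder`, `StyleAwareGenerator`, 68 landmarks, a 128-d style code) are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AudioToLandmarks(nn.Module):
    """Maps an audio feature sequence (e.g., MFCCs) to per-frame
    2D facial landmark coordinates. Dimensions are illustrative."""
    def __init__(self, audio_dim=28, hidden_dim=256, n_landmarks=68):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, audio_feats):            # (B, T, audio_dim)
        h, _ = self.rnn(audio_feats)
        return self.head(h)                    # (B, T, n_landmarks * 2)

class StyleEncoder(nn.Module):
    """Pools a set of style reference frames into a single style code
    by averaging per-frame features over the reference set."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, refs):                   # (B, N, 3, H, W)
        b, n = refs.shape[:2]
        feats = self.conv(refs.flatten(0, 1)).flatten(1)   # (B*N, 64)
        return self.fc(feats.view(b, n, -1).mean(dim=1))   # (B, style_dim)

class StyleAwareGenerator(nn.Module):
    """Renders one output frame conditioned on the identity image,
    the predicted landmarks, and the style code (broadcast here as
    constant feature maps; a stand-in for a real image generator)."""
    def __init__(self, n_landmarks=68, style_dim=128):
        super().__init__()
        cond_dim = n_landmarks * 2 + style_dim
        self.net = nn.Sequential(
            nn.Conv2d(3 + cond_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, identity, landmarks, style):  # (B,3,H,W), (B,2L), (B,S)
        cond = torch.cat([landmarks, style], dim=1)
        cond = cond[:, :, None, None].expand(-1, -1, *identity.shape[2:])
        return self.net(torch.cat([identity, cond], dim=1))

# Wiring the three stages together on dummy inputs.
audio = torch.randn(1, 100, 28)        # 100 frames of audio features
refs = torch.randn(1, 4, 3, 64, 64)    # 4 style reference frames
identity = torch.randn(1, 3, 64, 64)   # the single input image

landmarks = AudioToLandmarks()(audio)  # (1, 100, 136)
style = StyleEncoder()(refs)           # (1, 128)
frame = StyleAwareGenerator()(identity, landmarks[:, 0], style)  # first frame
```

The key design point this sketch mirrors is that the style code is computed once from the reference set and reused for every generated frame, which is what would let the same style transfer to any new static identity image.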