Automatically generating videos in which synthesized speech is synchronized with the lip movements of a talking head has great potential in many human-computer interaction scenarios. In this paper, we present an automatic method that generates synchronized speech and talking-head videos from text and a single face image of an arbitrary person. In contrast to previous text-driven talking-head generation methods, which can only synthesize the voice of a specific person, the proposed method can synthesize speech for any person who is unseen in the training stage. Specifically, the proposed method decomposes the generation of synchronized speech and talking-head videos into two stages, i.e., a text-to-speech (TTS) stage and a speech-driven talking-head generation stage. The proposed TTS module is a face-conditioned multi-speaker TTS model that derives the speaker identity information from face images instead of speech, which allows us to synthesize a personalized voice on the basis of the input face image. To generate talking-head videos from face images, we propose a facial-landmark-based method that predicts both lip movements and head rotations. Extensive experiments demonstrate that the proposed method is able to generate synchronized speech and talking-head videos for arbitrary persons and non-persons. The timbre of the synthesized voice is consistent with the appearance of the face in the input image, and the proposed landmark-based talking-head method outperforms the state-of-the-art landmark-based method in generating natural talking-head videos.
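The two-stage decomposition described above can be sketched as the following minimal pipeline. This is a hedged illustration only: every function name and data representation here is a hypothetical placeholder introduced for exposition, not the authors' actual implementation or model interface.

```python
# Sketch of the two-stage pipeline: (1) face-conditioned TTS,
# (2) speech-driven, landmark-based talking-head generation.
# All names below are hypothetical placeholders for illustration.

def face_conditioned_tts(text, face_embedding):
    """Stage 1: synthesize speech whose timbre is conditioned on a
    face embedding rather than on a reference speech sample."""
    # Placeholder: a real model would output a waveform; here we
    # emit one dummy audio frame per input character.
    return [hash((ch, tuple(face_embedding))) % 256 for ch in text]

def landmark_based_talking_head(speech, face_image):
    """Stage 2: predict facial landmarks (lip movements and head
    rotations) from speech, then render frames from the input image."""
    # Placeholder: one dummy video frame per audio frame, so the
    # video length matches the speech length (synchronization).
    return [{"frame": i, "image": face_image} for i, _ in enumerate(speech)]

def text_to_talking_head(text, face_image, face_embedding):
    """Full pipeline: text + single face image -> (speech, video)."""
    speech = face_conditioned_tts(text, face_embedding)
    video = landmark_based_talking_head(speech, face_image)
    return speech, video

speech, video = text_to_talking_head("hello", "face.png", [0.1, 0.2])
assert len(video) == len(speech)  # audio and video stay synchronized
```

The design point the sketch captures is that stage 1 needs only a face embedding (no enrollment speech), which is what allows the method to handle speakers unseen during training.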