Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the speaker's emotion presented in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks because it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges visual information to the corresponding speech prosody at three levels: lip, face, and scene. Specifically, we align lip movement with speech duration, and convey facial expression to speech energy and pitch via an attention mechanism over valence and arousal representations, inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted to speech waveforms by an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
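To make the three-level design concrete, the following is a minimal sketch of the hierarchical prosody idea in PyTorch, not the authors' implementation: lip features drive phoneme durations, face-level valence-arousal features drive frame-level energy and pitch through attention, and a pooled scene feature acts as a global emotion booster before mel-spectrogram decoding. All module names, dimensions, and fusion choices below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HierarchicalProsodyDubber(nn.Module):
    """Illustrative sketch of lip/face/scene prosody conditioning for dubbing."""

    def __init__(self, d_model: int = 256, n_mels: int = 80):
        super().__init__()
        # Text/phoneme encoder and reference-speaker conditioning (voice cloning).
        self.text_encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.speaker_proj = nn.Linear(d_model, d_model)
        # Lip branch: predicts per-phoneme durations from lip-motion features.
        self.duration_predictor = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )
        # Face branch: attention from expanded frames to valence-arousal features,
        # followed by projections to frame-level pitch and energy.
        self.face_attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.pitch_head = nn.Linear(d_model, 1)
        self.energy_head = nn.Linear(d_model, 1)
        # Scene branch ("emotion booster"): a global style vector added to every frame.
        self.scene_proj = nn.Linear(d_model, d_model)
        # Mel-spectrogram decoder; a neural vocoder would follow in practice.
        self.mel_decoder = nn.GRU(d_model + 2, d_model, batch_first=True)
        self.mel_head = nn.Linear(d_model, n_mels)

    def forward(self, phonemes, speaker_emb, lip_feats, face_va_feats, scene_feats):
        # phonemes:      (1, T_text, d)  text/phoneme embeddings
        # speaker_emb:   (1, d)          reference-audio speaker embedding
        # lip_feats:     (1, T_text, d)  lip features aligned to phonemes
        # face_va_feats: (1, T_face, d)  valence-arousal facial features
        # scene_feats:   (1, d)          pooled global scene feature
        h, _ = self.text_encoder(phonemes)
        h = h + self.speaker_proj(speaker_emb).unsqueeze(1)

        # Lip -> duration: expand each phoneme state by its predicted length.
        dur = self.duration_predictor(torch.cat([h, lip_feats], dim=-1))
        dur = dur.squeeze(-1).exp().round().clamp(min=1).long()
        frames = torch.repeat_interleave(h, dur[0], dim=1)  # batch size 1 for brevity

        # Face -> pitch/energy via attention over valence-arousal representations.
        attended, _ = self.face_attention(frames, face_va_feats, face_va_feats)
        pitch = self.pitch_head(attended)
        energy = self.energy_head(attended)

        # Scene -> global emotion "boost" added to every frame.
        frames = frames + attended + self.scene_proj(scene_feats).unsqueeze(1)

        # Decode the mel-spectrogram conditioned on prosody; a vocoder converts it to audio.
        dec_in = torch.cat([frames, pitch, energy], dim=-1)
        out, _ = self.mel_decoder(dec_in)
        return self.mel_head(out), dur, pitch, energy


# Shape check with random tensors (batch size 1).
model = HierarchicalProsodyDubber()
mel, dur, pitch, energy = model(
    torch.randn(1, 12, 256), torch.randn(1, 256),
    torch.randn(1, 12, 256), torch.randn(1, 50, 256), torch.randn(1, 256),
)
print(mel.shape)  # (1, total_frames, 80)
```

The sketch only shows how the three visual streams could condition duration, pitch, energy, and a global emotion bias; the paper's actual encoders, alignment, and training losses are not reproduced here.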