Few-shot visual dubbing aims to synchronize lip movements with arbitrary speech input for any talking-head video. Despite moderate progress, current approaches commonly require high-quality homologous video and audio sources, which prevents them from sufficiently leveraging heterogeneous data. In practice, collecting perfect homologous data can be intractable in some cases, for example, when videos have corrupted audio or blurry pictures. To exploit such data and support high-fidelity few-shot visual dubbing, in this paper we propose a simple yet efficient two-stage framework with greater flexibility in mining heterogeneous data. Specifically, our two-stage paradigm employs facial landmarks as an intermediate prior over latent representations and disentangles lip-movement prediction from the core task of realistic talking-head generation. In this way, the training corpora for the two sub-networks can be drawn independently from more readily available heterogeneous data. Moreover, thanks to this disentanglement, our framework allows further fine-tuning for a given talking head, leading to better speaker-identity preservation in the final synthesized results. The proposed method can also transfer appearance features from other speakers to the target speaker. Extensive experimental results demonstrate that our method surpasses the state of the art in generating highly realistic videos synchronized with the speech.
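To make the two-stage idea concrete, the following is a minimal, purely illustrative PyTorch sketch (not the authors' implementation): stage one maps audio features to facial landmarks, and stage two renders a frame from those landmarks plus a reference image of the target speaker. All module names, feature dimensions, and image sizes below are hypothetical assumptions for illustration only.

import torch
import torch.nn as nn

class AudioToLandmarks(nn.Module):
    """Stage 1 (sketch): predict facial landmark coordinates from audio features."""
    def __init__(self, audio_dim=80, n_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(),
            nn.Linear(256, n_landmarks * 2),  # (x, y) per landmark
        )

    def forward(self, audio_feats):           # (B, audio_dim)
        return self.net(audio_feats)          # (B, n_landmarks * 2)

class LandmarksToFrame(nn.Module):
    """Stage 2 (sketch): render a talking-head frame conditioned on landmarks
    and a reference frame providing the target speaker's appearance."""
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.fuse = nn.Linear(n_landmarks * 2, 3 * 64 * 64)

    def forward(self, landmarks, ref_frame):  # ref_frame: (B, 3, 64, 64)
        residual = self.fuse(landmarks).view(-1, 3, 64, 64)
        return torch.tanh(ref_frame + residual)

# Because the two stages interact only through the landmark interface, each
# could be trained on its own corpus (e.g. stage 1 on data with blurry frames,
# stage 2 on silent but sharp videos), which is the flexibility argued above.
stage1, stage2 = AudioToLandmarks(), LandmarksToFrame()
audio = torch.randn(4, 80)
ref = torch.rand(4, 3, 64, 64)
frame = stage2(stage1(audio), ref)
print(frame.shape)  # torch.Size([4, 3, 64, 64])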