Generating talking person portraits from arbitrary speech audio is a crucial problem in the fields of digital humans and the metaverse. A modern talking face generation method is expected to achieve generalized audio-lip synchronization, good video quality, and high system efficiency. Recently, the neural radiance field (NeRF) has become a popular rendering technique in this field, since it can achieve high-fidelity and 3D-consistent talking face generation from a few-minute-long training video. However, several challenges remain for NeRF-based methods: 1) as for lip synchronization, it is hard to generate a long facial motion sequence with high temporal consistency and audio-lip accuracy; 2) as for video quality, due to the limited data used to train the renderer, it is vulnerable to out-of-domain input conditions and occasionally produces poor rendering results; 3) as for system efficiency, the slow training and inference speed of the vanilla NeRF severely obstructs its use in real-world applications. In this paper, we propose GeneFace++ to address these challenges by 1) utilizing the pitch contour as an auxiliary feature and introducing a temporal loss in the facial motion prediction process; 2) proposing a landmark locally linear embedding method that regulates outliers in the predicted motion sequence to avoid robustness issues; 3) designing a computationally efficient NeRF-based motion-to-video renderer to achieve fast training and real-time inference. With these settings, GeneFace++ becomes the first NeRF-based method that achieves stable and real-time talking face generation with generalized audio-lip synchronization. Extensive experiments show that our method outperforms state-of-the-art baselines in both subjective and objective evaluations. Video samples are available at https://genefaceplusplus.github.io .
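To make the landmark regulation idea concrete, the following is a minimal NumPy sketch of locally-linear-embedding-style outlier projection: a predicted landmark frame is reconstructed as an affine combination of its nearest neighbors in the training-set landmark database, pulling out-of-domain predictions back toward the training manifold. The function name, neighbor count `k`, `strength` blending parameter, and regularization constant are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lle_regulate(pred, train_db, k=5, strength=1.0):
    """Pull a predicted landmark frame toward the manifold of training
    landmarks via a locally-linear-embedding-style projection.

    pred:     (D,) predicted landmark vector (possibly an outlier)
    train_db: (N, D) landmark frames extracted from the training video
    k:        number of nearest neighbors used for reconstruction
    strength: 0 keeps the raw prediction, 1 applies the full projection
    """
    # 1) find the k nearest neighbors of the prediction in the database
    dists = np.linalg.norm(train_db - pred, axis=1)
    neighbors = train_db[np.argsort(dists)[:k]]        # (k, D)

    # 2) solve for weights w (summing to 1) that minimize
    #    ||pred - w @ neighbors||^2  -- the standard LLE weight step
    Z = neighbors - pred                               # center on the query
    G = Z @ Z.T                                        # local Gram matrix (k, k)
    G += np.eye(k) * 1e-3 * np.trace(G)                # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()

    # 3) replace the outlier by its in-manifold reconstruction
    projected = w @ neighbors
    return (1.0 - strength) * pred + strength * projected

# Toy usage: training landmarks lie on a 2-D plane inside 6-D space;
# an outlier prediction is pushed off that plane and then regulated.
rng = np.random.default_rng(0)
db = np.zeros((200, 6))
db[:, :2] = rng.normal(size=(200, 2))                  # in-plane coordinates
outlier = db[0].copy()
outlier[2:] += 3.0                                     # push frame off-manifold
regulated = lle_regulate(outlier, db, k=5)
# any affine combination of in-plane neighbors stays in the plane,
# so the regulated frame's off-plane coordinates return to zero
```

The `strength` blend lets one trade off fidelity to the raw prediction against robustness, which is useful when the predictor is only mildly out of domain.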