As a key component of talking face generation, lip movement generation determines the naturalness and coherence of the generated talking face video. Prior literature mainly focuses on speech-to-lip generation, while text-to-lip (T2L) generation remains largely unexplored. T2L is a challenging task, and existing end-to-end works rely on an attention mechanism and autoregressive (AR) decoding. However, AR decoding generates each lip frame conditioned on previously generated frames, which inherently limits inference speed and also degrades the quality of the generated lip frames due to error propagation. This motivates research on parallel T2L generation. In this work, we propose a parallel decoding model for fast and high-fidelity text-to-lip generation (ParaLip). Specifically, we predict the duration of the encoded linguistic features and model the target lip frames conditioned on the encoded linguistic features with their duration in a non-autoregressive manner. Furthermore, we incorporate the structural similarity index (SSIM) loss and adversarial learning to improve the perceptual quality of the generated lip frames and alleviate the blurry prediction problem. Extensive experiments conducted on the GRID and TCD-TIMIT datasets demonstrate the superiority of the proposed method. Video samples are available via \url{https://paralip.github.io/}.
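To make the duration-based non-autoregressive conditioning concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a PyTorch setting, and the helper name `length_regulate` and the tensor shapes are illustrative. It shows how token-level linguistic encodings can be expanded to frame level using predicted integer durations, after which all lip frames can be decoded in parallel rather than one at a time.

```python
import torch

def length_regulate(encodings: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Expand token-level linguistic encodings to frame level.

    encodings: (num_tokens, hidden_dim) encoded linguistic features
    durations: (num_tokens,) predicted duration of each token, in frames
    returns:   (sum(durations), hidden_dim) frame-aligned features that a
               parallel decoder can map to all lip frames at once
    """
    return torch.repeat_interleave(encodings, durations, dim=0)

# Example: three linguistic tokens predicted to last 2, 3, and 1 video frames.
enc = torch.randn(3, 256)
dur = torch.tensor([2, 3, 1])
frames = length_regulate(enc, dur)  # shape: (6, 256)
```

Because every frame position is determined up front by the durations, the decoder needs no recurrence over previously generated frames, which removes both the sequential inference bottleneck and the error-propagation path of AR decoding.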
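The SSIM loss compares luminance, contrast, and structural statistics between a generated frame and its ground truth instead of penalizing per-pixel differences alone, which is why it helps counteract blurry predictions. Below is a simplified global (single-window) variant as a hedged sketch, assuming pixel values in [0, 1]; practical implementations, and likely the paper's, average SSIM over local Gaussian windows.

```python
import torch

def ssim_loss(x: torch.Tensor, y: torch.Tensor,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """1 - SSIM between two images with pixel values in [0, 1].

    c1 and c2 are the standard stabilizing constants (K1=0.01, K2=0.03,
    dynamic range L=1). This global form uses whole-image statistics;
    windowed SSIM applies the same formula over local patches.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1.0 - ssim
```

Minimizing this term alongside an adversarial loss encourages frames that preserve local structure (e.g., sharp lip contours) rather than the over-smoothed averages that pure L1/L2 regression tends to produce.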