Face frontalization consists of synthesizing a frontally-viewed face from an arbitrarily-viewed one. The main contribution of this paper is a frontalization methodology that preserves non-rigid facial deformations in order to boost the performance of visually assisted speech communication. The method alternates between the estimation of (i)~the rigid transformation (scale, rotation, and translation) and (ii)~the non-rigid deformation between an arbitrarily-viewed face and a face model. The method has two important merits: it can deal with non-Gaussian errors in the data and it incorporates a dynamical face deformation model. For that purpose, we use the generalized Student t-distribution in combination with a linear dynamic system in order to account for both rigid head motions and time-varying facial deformations caused by speech production. We propose to use the zero-mean normalized cross-correlation (ZNCC) score to evaluate the ability of the method to preserve facial expressions. The method is thoroughly evaluated and compared with several state-of-the-art methods, based either on traditional geometric models or on deep learning. Moreover, we show that the method, when incorporated into deep learning pipelines, namely lip reading and speech enhancement, improves word recognition and speech intelligibility scores by a considerable margin. Supplemental material is accessible at https://team.inria.fr/robotlearn/research/facefrontalization-benchmark/
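For reference, a standard definition of the ZNCC between two aligned images $I$ and $J$, compared over a region $\Omega$ of $N$ pixels, is the following (the exact normalization adopted in the paper may differ):
\[
\operatorname{ZNCC}(I, J) \;=\; \frac{1}{N}\sum_{\boldsymbol{x}\in\Omega} \frac{\bigl(I(\boldsymbol{x})-\mu_I\bigr)\,\bigl(J(\boldsymbol{x})-\mu_J\bigr)}{\sigma_I\,\sigma_J},
\]
where $\mu_I, \mu_J$ and $\sigma_I, \sigma_J$ denote the means and standard deviations of the two images over $\Omega$. The score lies in $[-1, 1]$, with $1$ indicating a perfect match; being invariant to affine changes in intensity, it isolates how well facial structure is preserved.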