The transfer of facial expressions from people to 3D face models is a classic computer graphics problem. In this paper, we present a novel, learning-based approach to transferring facial expressions and head movements from images and videos to a biomechanical model of the face-head-neck complex. Leveraging the Facial Action Coding System (FACS) as an intermediate representation of the expression space, we train a deep neural network that takes FACS Action Units (AUs) as input and outputs suitable facial muscle and jaw activation signals for the musculoskeletal model. Through biomechanical simulation, the activations deform the facial soft tissues, thereby transferring the expression to the model. Our approach offers two main advantages over previous methods. First, the facial expressions are anatomically consistent because our biomechanical model emulates the relevant anatomy of the face, head, and neck. Second, by training the neural network on data generated by the biomechanical model itself, we eliminate the manual effort of collecting data for expression transfer. We demonstrate the success of our approach through experiments that transfer facial expressions and head poses from a range of facial images and videos onto our face-head-neck model.
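To make the AU-to-activation mapping concrete, the following is a minimal sketch of the kind of network the abstract describes, written in PyTorch. The layer sizes, the number of AUs, and the number of muscle/jaw activation channels (`NUM_AUS`, `NUM_ACTIVATIONS`) are assumptions for illustration, not values taken from the paper; likewise, the self-supervised data-generation loop (sampling activations, simulating, and recording the resulting AUs) is only indicated in comments.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only; the actual counts of AUs and
# muscle/jaw activation signals are defined by the authors' model.
NUM_AUS = 17
NUM_ACTIVATIONS = 32


class AUToActivationNet(nn.Module):
    """Small MLP mapping a FACS AU vector to muscle/jaw activation signals."""

    def __init__(self, num_aus: int = NUM_AUS, num_activations: int = NUM_ACTIVATIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_aus, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, num_activations),
            nn.Sigmoid(),  # activations assumed to lie in [0, 1]
        )

    def forward(self, aus: torch.Tensor) -> torch.Tensor:
        return self.net(aus)


def train_step(model, optimizer, au_batch, activation_batch):
    """One supervised step on (AU, activation) pairs.

    The pairs are assumed to be generated by the biomechanical simulator
    itself: sample activations, simulate the resulting expression, and
    measure the AUs it produces, so no manual data collection is needed.
    """
    optimizer.zero_grad()
    pred = model(au_batch)
    loss = nn.functional.mse_loss(pred, activation_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```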