Emotional expressions are the behaviors that communicate our emotional state or attitude to others, and they are conveyed through both verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial expressions, vocal cues, and physical gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused using a model-level fusion strategy, and a recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network surpasses state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
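To make the overall pipeline concrete, the following is a minimal PyTorch sketch of model-level fusion of per-frame audio and visual deep features followed by a recurrent network for temporal modeling and a continuous valence regressor. The feature dimensions, the use of a GRU, the projection layers, and the single-output regression head are illustrative assumptions, not the exact architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ModelLevelFusion(nn.Module):
    """Illustrative model-level fusion of audio and visual deep features,
    followed by a recurrent layer for temporal dynamics and a regression
    head for continuous valence prediction (dimensions are assumed)."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256):
        super().__init__()
        # Project each modality's per-frame features into a shared space.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Recurrent network over the fused sequence captures temporal dynamics.
        self.rnn = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        # Regress a continuous valence value per time step.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (batch, time, audio_dim); visual_feats: (batch, time, visual_dim)
        fused = torch.cat(
            [torch.relu(self.audio_proj(audio_feats)),
             torch.relu(self.visual_proj(visual_feats))],
            dim=-1,
        )
        out, _ = self.rnn(fused)
        return self.head(out).squeeze(-1)  # (batch, time) valence predictions

# Example: a batch of 4 sequences, 100 frames each.
model = ModelLevelFusion()
valence = model(torch.randn(4, 100, 128), torch.randn(4, 100, 512))
print(valence.shape)  # torch.Size([4, 100])
```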