Emotion recognition is an important research field for Human-Computer Interaction (HCI). Audio-Video Emotion Recognition (AVER) is now commonly addressed with Deep Neural Network (DNN) modeling tools. In published papers, as a rule, the authors show only cases in which the multi-modal approach is superior to audio-only or video-only modalities. However, there are also cases in which a single modality is superior. In our research, we hypothesize that for fuzzy categories of emotional events, the higher noise of one modality can amplify the lower noise of the second modality, represented indirectly in the parameters of the modeling neural network. To avoid such cross-modal information interference, we define a Multi-modal Residual Perceptron Network (MRPN), which learns from multi-modal network branches and creates a deep feature representation with reduced noise. With the proposed MRPN model and a novel time augmentation for streamed digital movies, the state-of-the-art average recognition rate was improved to 91.4% for the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset and to 83.15% for the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D). Moreover, the MRPN concept shows its potential for multi-modal classifiers dealing with signal sources not only of optical and acoustical type.
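To make the fusion idea concrete, below is a minimal sketch of a residual fusion of two modality branches, loosely following the MRPN description above. The layer sizes, the two-layer fusion head, the 256-dimensional embeddings, and the class count are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: residual fusion of audio and video branch embeddings.
# All dimensions and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class ResidualFusionBlock(nn.Module):
    """Fuses audio and video embeddings while keeping residual (skip) paths
    from each single-modality branch, so a noisy modality does not
    overwrite the cleaner representation of the other."""

    def __init__(self, dim: int = 256, num_classes: int = 8):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio_feat: torch.Tensor, video_feat: torch.Tensor):
        # Joint representation learned from both branches ...
        joint = self.fuse(torch.cat([audio_feat, video_feat], dim=-1))
        # ... plus residual contributions of each single-modality branch,
        # letting the fused features fall back on the less noisy modality.
        fused = joint + audio_feat + video_feat
        return self.classifier(fused)


if __name__ == "__main__":
    # Hypothetical 256-dim embeddings from separate audio/video encoders.
    audio = torch.randn(4, 256)
    video = torch.randn(4, 256)
    model = ResidualFusionBlock(dim=256, num_classes=8)
    logits = model(audio, video)
    print(logits.shape)  # torch.Size([4, 8])
```

In this sketch, the residual connections are what keep single-modality information available after fusion; the actual MRPN may combine branches differently.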