Audio-Video Emotion Recognition is now commonly tackled with Deep Neural Network modeling tools. In published papers, as a rule, the authors show only cases where multi-modality is superior to audio-only or video-only modality. However, there are cases in which uni-modality performs better. In our research, we hypothesize that for fuzzy categories of emotional events, the within-modal and inter-modal noisy information, represented indirectly in the parameters of the modeling neural network, impedes better performance of the existing late-fusion and end-to-end multi-modal network training strategies. To take advantage of both solutions and overcome their deficiencies, we define a Multi-modal Residual Perceptron Network which performs end-to-end learning from the multi-modal network branches and generalizes to a better multi-modal feature representation. With the proposed Multi-modal Residual Perceptron Network and a novel time augmentation for streamed digital movies, the state-of-the-art average recognition rate was improved to 91.4% on The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset and to 83.15% on the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D). Moreover, the Multi-modal Residual Perceptron Network concept shows its potential for multi-modal applications dealing with signal sources beyond the optical and acoustical types.
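To make the fusion idea concrete, the following is a minimal, hypothetical sketch (PyTorch-style) of how a residual perceptron block could combine audio and video branch features: the fusion perceptron learns a correction on top of the concatenated uni-modal embeddings rather than replacing them, so noisy cross-modal information does not suppress a useful uni-modal signal. The class name, feature dimensions, layer sizes, and the residual-over-concatenation design are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiModalResidualPerceptron(nn.Module):
    """Hypothetical sketch of a residual fusion block over two modality branches.

    Dimensions and the 8 output classes are assumptions made for illustration.
    """

    def __init__(self, audio_dim=128, video_dim=128, hidden_dim=256, num_classes=8):
        super().__init__()
        fused_dim = audio_dim + video_dim
        # Perceptron block acting on the concatenated branch features.
        self.fusion = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, fused_dim),
        )
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, audio_feat, video_feat):
        # Concatenate per-modality embeddings produced by the audio/video branches.
        x = torch.cat([audio_feat, video_feat], dim=-1)
        # Residual connection: fused features are a learned correction added to
        # the uni-modal features, so the network can fall back on either branch.
        x = x + self.fusion(x)
        return self.classifier(x)

# Usage with dummy branch outputs (batch of 4 clips).
model = MultiModalResidualPerceptron()
audio_feat = torch.randn(4, 128)
video_feat = torch.randn(4, 128)
logits = model(audio_feat, video_feat)  # shape: (4, 8)
```

In such a design the residual path keeps the original uni-modal features available to the classifier end to end, which is one plausible way to reconcile the late-fusion and end-to-end training strategies discussed above.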