Emotion recognition from speech is a challenging task. Recent advances in deep learning have established the bi-directional recurrent neural network (Bi-RNN) with an attention mechanism as a standard method for speech emotion recognition: multi-modal features (audio and text) are extracted and attended over, then fused for the downstream emotion classification task. In this paper, we propose a simple yet efficient neural network architecture that exploits both acoustic and lexical information from speech. The proposed framework uses multi-scale convolutional layers (MSCNN) to obtain hidden representations of both audio and text. A statistical pooling unit (SPU) is then used to further extract features within each modality. In addition, an attention module can be built on top of the MSCNN-SPU (audio) and MSCNN (text) branches to further improve performance. Extensive experiments show that the proposed model outperforms previous state-of-the-art methods on the IEMOCAP dataset with four emotion categories (i.e., angry, happy, sad, and neutral) in both weighted accuracy (WA) and unweighted accuracy (UA), with improvements of 5.0% and 5.2%, respectively, under the ASR setting.
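For illustration, below is a minimal PyTorch sketch of the pipeline described above, under assumptions not stated in the abstract: the kernel sizes, channel widths, pooling statistics (mean and max), and the dot-product attention form are all hypothetical placeholders, not the authors' exact configuration. It shows an MSCNN of parallel 1-D convolutions at several kernel scales, an SPU that summarizes the time axis with simple statistics, and an attention module that lets the pooled audio features attend over the text features before fusion.

```python
# Hypothetical sketch of MSCNN + SPU + attention fusion; layer sizes,
# kernel scales, and attention form are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSCNN(nn.Module):
    """Multi-scale CNN: parallel 1-D convolutions with different kernel sizes."""
    def __init__(self, in_dim, out_dim, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, out_dim, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                  # x: (batch, time, in_dim)
        x = x.transpose(1, 2)              # -> (batch, in_dim, time)
        # Concatenate the feature maps from every scale along the channel axis.
        return torch.cat([F.relu(c(x)) for c in self.convs], dim=1)  # (B, C, T)

def spu(h):
    """Statistical pooling unit: summarize the time axis with statistics."""
    # Mean and max over time; other statistics (min, std) could be added.
    return torch.cat([h.mean(dim=2), h.max(dim=2).values], dim=1)    # (B, 2C)

class AttentiveFusion(nn.Module):
    """Pooled audio statistics attend over text features; fuse and classify."""
    def __init__(self, audio_dim, text_dim, n_classes=4):
        super().__init__()
        self.query = nn.Linear(audio_dim, text_dim)  # audio stats -> query
        self.classifier = nn.Linear(audio_dim + text_dim, n_classes)

    def forward(self, audio_stats, text_feats):
        # audio_stats: (B, audio_dim); text_feats: (B, C_text, T)
        q = self.query(audio_stats).unsqueeze(2)                  # (B, C_text, 1)
        scores = torch.softmax(
            (text_feats * q).sum(dim=1, keepdim=True), dim=2)     # (B, 1, T)
        text_ctx = (text_feats * scores).sum(dim=2)               # (B, C_text)
        return self.classifier(torch.cat([audio_stats, text_ctx], dim=1))

# Usage with toy inputs (feature dimensions are placeholders):
audio_enc, text_enc = MSCNN(40, 64), MSCNN(300, 64)
fuser = AttentiveFusion(audio_dim=2 * 64 * 3, text_dim=64 * 3)
a = spu(audio_enc(torch.randn(2, 200, 40)))  # e.g. 40-dim acoustic frames
t = text_enc(torch.randn(2, 30, 300))        # e.g. 300-dim word embeddings
logits = fuser(a, t)                         # (2, 4): angry/happy/sad/neutral
```

The parallel kernel widths stand in for the "multi-scale" aspect (each scale sees a different temporal context), and the SPU replaces recurrent temporal modeling with fixed-order statistics, which is what makes the architecture simpler than a Bi-RNN baseline.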