Deep learning has emerged as a powerful alternative to hand-crafted methods for emotion recognition on combined acoustic and text modalities. Baseline systems model emotion information in the text and acoustic modes independently using Deep Convolutional Neural Networks (DCNNs) and Recurrent Neural Networks (RNNs), followed by attention, fusion, and classification stages. In this paper, we present a deep learning-based approach that exploits and fuses text and acoustic data for emotion classification. We utilize a SincNet layer, based on parameterized sinc functions implementing band-pass filters, to extract acoustic features from raw audio, followed by a DCNN. This approach learns filter banks tuned for emotion recognition and provides more effective features than directly applying convolutions over the raw speech signal. For text processing, we use two branches in parallel (a DCNN, and a bidirectional RNN followed by a DCNN), where cross attention is introduced to infer N-gram-level correlations over the hidden representations produced by the Bi-RNN. Following existing state-of-the-art methods, we evaluate the performance of the proposed system on the IEMOCAP dataset. Experimental results indicate that the proposed system outperforms existing methods, achieving a 3.5% improvement in weighted accuracy.
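To make the acoustic front end concrete, the sketch below shows a SincNet-style convolution layer in PyTorch, where each kernel is a band-pass filter parameterized only by a learnable low cutoff and bandwidth. The sampling rate, filter count, kernel size, and cutoff initialization are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a SincNet-style band-pass convolution, assuming 16 kHz
# input; filter count, kernel size, and initialization are hypothetical.
import torch
import torch.nn as nn

class SincConv1d(nn.Module):
    """1-D convolution whose kernels are parameterized sinc band-pass filters.

    Only each filter's low cutoff and bandwidth are learned, so the layer
    learns a filter bank rather than free-form convolution kernels.
    """

    def __init__(self, out_channels=64, kernel_size=251, sample_rate=16000):
        super().__init__()
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate
        # Learnable low cutoffs and bandwidths (Hz), spread over the spectrum.
        low = torch.linspace(30.0, sample_rate / 2 - 200.0, out_channels)
        self.low_hz = nn.Parameter(low.unsqueeze(1))
        self.band_hz = nn.Parameter(torch.full((out_channels, 1), 100.0))
        # Time axis (seconds) and Hamming window are fixed buffers.
        n = torch.arange(kernel_size) - (kernel_size - 1) / 2
        self.register_buffer("t", n / sample_rate)
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x):  # x: (batch, 1, time)
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz),
                           max=self.sample_rate / 2)

        def lowpass(f):  # ideal low-pass impulse response with cutoff f (Hz)
            return 2 * f * torch.sinc(2 * f * self.t)

        # Difference of two low-pass sincs yields a band-pass filter.
        filters = (lowpass(high) - lowpass(low)) * self.window
        filters = filters / (filters.abs().sum(dim=1, keepdim=True) + 1e-8)
        return nn.functional.conv1d(x, filters.unsqueeze(1),
                                    padding=self.kernel_size // 2)
```

The output of this layer would then feed the DCNN that produces the acoustic representation; constraining kernels to sinc band-pass filters keeps the front end interpretable and reduces the number of free parameters relative to an unconstrained first convolution.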
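The text side can be sketched similarly. The code below is a minimal, assumed implementation of the two parallel branches with cross attention: a DCNN branch producing N-gram-level features, a Bi-RNN branch followed by a DCNN, and a multi-head attention module standing in for the paper's cross attention. All dimensions, the choice of GRU cells, and the final pooling are hypothetical.

```python
# A minimal sketch of the two-branch text encoder with cross attention;
# dimensions, GRU cells, and pooling are assumptions, not the paper's spec.
import torch
import torch.nn as nn

class TwoBranchTextEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Branch 1: DCNN over word embeddings (kernel size 3 ~ trigram features).
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        # Branch 2: Bi-RNN whose hidden states feed a second DCNN.
        self.birnn = nn.GRU(emb_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        # Cross attention: N-gram features query the Bi-RNN hidden states.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4,
                                          batch_first=True)

    def forward(self, tokens):          # tokens: (batch, seq_len)
        e = self.embed(tokens)          # (batch, seq_len, emb_dim)
        ngram = self.conv(e.transpose(1, 2)).transpose(1, 2)   # (B, T, H)
        rnn_out, _ = self.birnn(e)      # (batch, seq_len, hidden)
        # Cross attention infers N-gram-level correlations on the RNN states.
        ctx, _ = self.attn(query=ngram, key=rnn_out, value=rnn_out)
        fused = self.conv2(ctx.transpose(1, 2)).transpose(1, 2)
        # Pool over time into an utterance-level text representation.
        return torch.cat([ngram, fused], dim=-1).mean(dim=1)
```

In this reading, the cross attention lets local N-gram features select the contextual Bi-RNN states most relevant to them before the second DCNN; the resulting text vector would then be fused with the SincNet-based acoustic features for classification.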