Speech emotion recognition (SER) is an important task in Human-Computer Interaction (HCI) applications. However, it is difficult to choose optimal features and to handle imbalanced labeled data. In this article, we investigate hybrid data augmentation (HDA) methods that generate and balance data using both traditional techniques and generative adversarial networks (GANs). To evaluate the effectiveness of the HDA methods, we design a deep learning framework, named ADCRNN, that integrates a deep dilated convolutional-recurrent neural network with an attention mechanism. In addition, we adopt 3D log-Mel spectrogram (MelSpec) features as the input to the framework. Furthermore, we construct a loss function that combines a softmax loss and a center loss to classify the emotions. To validate the proposed methods, we use the EmoDB dataset, which consists of several emotion classes with imbalanced sample counts. Experimental results show that the proposed methods outperform state-of-the-art methods on EmoDB, achieving 87.12% and 88.47% accuracy for the traditional and GAN-based augmentation methods, respectively.
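The 3D MelSpec input can be read as a static log-Mel spectrogram stacked with its first- and second-order deltas as three channels, a common construction for such features. Below is a minimal sketch of this extraction using librosa; the sampling rate and `n_mels` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import librosa

def log_mel_3d(path, sr=16000, n_mels=64):
    """Stack static, delta, and delta-delta log-Mel features as 3 channels."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)            # static log-Mel channel
    d1 = librosa.feature.delta(log_mel)           # first-order delta channel
    d2 = librosa.feature.delta(log_mel, order=2)  # second-order delta channel
    return np.stack([log_mel, d1, d2], axis=0)    # shape: (3, n_mels, frames)
```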
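The combined objective adds a weighted center loss to the standard softmax (cross-entropy) loss, pulling each sample's feature embedding toward a learned per-class center while the softmax term separates the classes. A minimal PyTorch sketch follows; the `CenterLoss` module, `feat_dim`, and the weight `lam` are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: penalizes distance between features and their class centers."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        # One learnable center per emotion class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Select each sample's class center and compute the squared distance.
        centers_batch = self.centers[labels]  # (batch, feat_dim)
        return 0.5 * ((features - centers_batch) ** 2).sum(dim=1).mean()

# Combined objective: softmax (cross-entropy) loss plus weighted center loss.
ce = nn.CrossEntropyLoss()
center = CenterLoss(num_classes=7, feat_dim=128)  # 7 EmoDB emotions; feat_dim assumed

def total_loss(logits, features, labels, lam=0.01):
    return ce(logits, labels) + lam * center(features, labels)
```

In practice, the class centers are trained jointly with the network, and the weight `lam` balances intra-class compactness against inter-class separability.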