Automatic emotion recognition plays a key role in human-computer interaction as it has the potential to enrich next-generation artificial intelligence with emotional intelligence. It finds applications in customer and/or representative behavior analysis in call centers, gaming, personal assistants, and social robots, to mention a few. Therefore, there has been an increasing demand for robust automatic methods to analyze and recognize various emotions. In this paper, we propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from the speech and text modalities. More specifically, we i) adapt a residual network (ResNet) based model, trained on a large-scale speaker recognition task, using transfer learning along with a spectrogram augmentation approach to recognize emotions from speech, and ii) use a fine-tuned bidirectional encoder representations from transformers (BERT) based model to represent and recognize emotions from text. The proposed system then combines the ResNet- and BERT-based model scores using a late fusion strategy to further improve emotion recognition performance. The proposed multimodal solution addresses the data scarcity limitation in emotion recognition through transfer learning, data augmentation, and fine-tuning, thereby improving the generalization performance of the emotion recognition models. We evaluate the effectiveness of the proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset. Experimental results indicate that both the audio- and text-based models improve emotion recognition performance and that the proposed multimodal solution achieves state-of-the-art results on the IEMOCAP benchmark.
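The abstract does not detail the spectrogram augmentation procedure for the speech branch; a common choice for speech models is SpecAugment-style frequency and time masking applied to log-mel spectrograms. The following is a minimal illustrative sketch under that assumption; the function name, mask counts, and mask widths are hypothetical, not the paper's exact configuration.

```python
import numpy as np

def augment_spectrogram(spec, num_freq_masks=2, num_time_masks=2,
                        max_freq_width=8, max_time_width=20, rng=None):
    """Apply SpecAugment-style masking to a (num_bins, num_frames)
    log-mel spectrogram. Masked regions are filled with the mean value."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    num_bins, num_frames = spec.shape
    fill = spec.mean()

    # Mask random bands of frequency bins.
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, max_freq_width + 1))
        start = int(rng.integers(0, max(1, num_bins - width)))
        spec[start:start + width, :] = fill

    # Mask random spans of time frames.
    for _ in range(num_time_masks):
        width = int(rng.integers(0, max_time_width + 1))
        start = int(rng.integers(0, max(1, num_frames - width)))
        spec[:, start:start + width] = fill

    return spec

if __name__ == "__main__":
    # Example: augment a random 64-bin, 300-frame spectrogram.
    dummy = np.random.randn(64, 300).astype(np.float32)
    print(augment_spectrogram(dummy).shape)  # (64, 300)
```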
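The late-fusion step can be illustrated as a weighted combination of the per-class posterior scores produced by the speech (ResNet) and text (BERT) models. The sketch below is assumption-based: the four-class emotion set, the weighted-sum rule, and the dev-set weight search are illustrative choices, not the paper's exact fusion recipe.

```python
import numpy as np

# Hypothetical four-class setup commonly used with IEMOCAP.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def late_fusion(speech_scores, text_scores, alpha=0.5):
    """Fuse per-class posteriors from the speech and text models
    with a weighted sum; alpha is the weight on the speech model."""
    speech_scores = np.asarray(speech_scores, dtype=np.float64)
    text_scores = np.asarray(text_scores, dtype=np.float64)
    return alpha * speech_scores + (1.0 - alpha) * text_scores

def tune_alpha(speech_dev, text_dev, labels, grid=np.linspace(0, 1, 21)):
    """Pick the fusion weight that maximizes accuracy on a held-out dev set."""
    best_alpha, best_acc = 0.5, -1.0
    for alpha in grid:
        preds = late_fusion(speech_dev, text_dev, alpha).argmax(axis=1)
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_alpha, best_acc = float(alpha), acc
    return best_alpha, best_acc

if __name__ == "__main__":
    # Toy posteriors for three utterances over the four classes.
    speech = np.array([[0.6, 0.2, 0.1, 0.1],
                       [0.2, 0.5, 0.2, 0.1],
                       [0.1, 0.1, 0.2, 0.6]])
    text = np.array([[0.5, 0.3, 0.1, 0.1],
                     [0.1, 0.6, 0.2, 0.1],
                     [0.2, 0.1, 0.3, 0.4]])
    labels = np.array([0, 1, 3])
    alpha, acc = tune_alpha(speech, text, labels)
    print(EMOTIONS, alpha, acc)
```

Score-level fusion of this kind keeps the two modality models independent, so either branch can be retrained or swapped without touching the other.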