Audio-video multimodal emotion recognition has attracted considerable attention because of its robust performance. Most existing methods focus on proposing different cross-modal fusion strategies. However, these strategies introduce redundancy into the features of the different modalities without fully considering the complementarity between them, and they do not guarantee that the original semantic information is preserved during intra- and inter-modal interactions. In this paper, we propose a novel cross-modal fusion network based on self-attention and residual structure (CFN-SR) for multimodal emotion recognition. First, we perform representation learning for the audio and video modalities, obtaining their semantic features with an efficient ResNeXt and a 1D CNN, respectively. Second, we feed the features of the two modalities into cross-modal blocks separately, where the self-attention mechanism and the residual structure ensure efficient complementarity and completeness of the information. Finally, we obtain the emotion output by concatenating the fused representation with the original representation. To verify the effectiveness of the proposed method, we conduct experiments on the RAVDESS dataset. The results show that CFN-SR achieves state-of-the-art performance, reaching 75.76% accuracy with 26.30M parameters. Our code is available at https://github.com/skeletonNN/CFN-SR.
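The fusion step described above — cross-modal blocks built from self-attention with a residual path, followed by concatenation of the fused and original representations — can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact implementation: the module names, feature dimension, number of heads, and the 8-class output (the RAVDESS emotion labels) are assumptions; consult the linked repository for the real architecture.

```python
# Hedged sketch of a CFN-SR-style cross-modal fusion head.
# Assumptions: dim, heads, pooling, and class names are illustrative only.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """One modality attends to the other via multi-head self-attention;
    a residual connection preserves the original semantic features."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, kv_feats: torch.Tensor) -> torch.Tensor:
        # Cross-attention: queries from one modality, keys/values from the other.
        fused, _ = self.attn(query_feats, kv_feats, kv_feats)
        # Residual path keeps the query modality's original information intact.
        return self.norm(query_feats + fused)


class FusionHead(nn.Module):
    """Classifier over the concatenation of fused and original representations,
    mirroring the 'splice fused with original' step in the abstract."""

    def __init__(self, dim: int = 128, num_classes: int = 8):
        super().__init__()
        self.a2v = CrossModalBlock(dim)  # audio attends to video
        self.v2a = CrossModalBlock(dim)  # video attends to audio
        self.fc = nn.Linear(4 * dim, num_classes)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        fused_a = self.a2v(audio, video)
        fused_v = self.v2a(video, audio)
        # Mean-pool over time, then concatenate fused and original features.
        feats = torch.cat(
            [fused_a.mean(1), fused_v.mean(1), audio.mean(1), video.mean(1)],
            dim=-1,
        )
        return self.fc(feats)
```

Here `audio` and `video` stand in for the sequence features produced by the 1D CNN and ResNeXt encoders, shaped `(batch, time, dim)`; the residual-plus-normalization pattern is what lets the block add cross-modal context without discarding the original representation.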