Multimodal Emotion Recognition in Conversation (ERC) plays an important role in human-computer interaction and conversational robotics, as it enables machines to provide empathetic services. Inspired by the human ability to integrate multiple senses, multimodal data modeling has become an active research area in recent years. Several graph-based approaches claim to capture interactive information between modalities, but the heterogeneity of multimodal data prevents these methods from reaching optimal solutions. In this work, we introduce a multimodal fusion approach for emotion detection in conversation, named Graph and Attention based Two-stage Multi-source Information Fusion (GA2MIF). Our method avoids feeding a heterogeneous graph into the model and eliminates the complex, redundant connections that arise during graph construction. GA2MIF performs contextual modeling and cross-modal modeling by leveraging Multi-head Directed Graph ATtention networks (MDGATs) and Multi-head Pairwise Cross-modal ATtention networks (MPCATs), respectively. Extensive experiments on two public datasets (i.e., IEMOCAP and MELD) demonstrate that the proposed GA2MIF effectively captures intra-modal long-range contextual information and inter-modal complementary information, and outperforms prevailing State-Of-The-Art (SOTA) models by a notable margin.
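To make the two-stage design concrete, the following is a minimal sketch, not the authors' implementation: it stands in for MDGATs with masked multi-head self-attention over a directed utterance window, and for MPCATs with pairwise multi-head cross-attention between modalities. The window sizes, feature dimension, residual combination, and classifier head are all illustrative assumptions.

```python
# Minimal sketch of a two-stage multimodal fusion in the spirit of GA2MIF (assumptions noted below).
# Stage 1 approximates MDGATs with windowed, masked self-attention per modality;
# Stage 2 approximates MPCATs with pairwise cross-modal attention. Not the released code.
import torch
import torch.nn as nn


def windowed_context_mask(num_utts: int, past: int = 4, future: int = 2) -> torch.Tensor:
    """Boolean mask (True = blocked): each utterance attends only to a directed
    window of `past` preceding and `future` following utterances, mimicking a
    sparse context graph rather than a fully connected one."""
    idx = torch.arange(num_utts)
    offset = idx.unsqueeze(0) - idx.unsqueeze(1)  # offset[i, j] = j - i
    allowed = (offset >= -past) & (offset <= future)
    return ~allowed


class TwoStageFusion(nn.Module):
    """Stage 1: intra-modal contextual modeling; Stage 2: inter-modal fusion."""

    def __init__(self, dim: int = 128, heads: int = 4, num_classes: int = 6):
        super().__init__()
        # One contextual attention block per modality (acoustic, visual, textual).
        self.context_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(3)]
        )
        # One cross-modal block per ordered modality pair (3 modalities -> 6 pairs).
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(6)]
        )
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, feats, past: int = 4, future: int = 2):
        # feats: list of 3 tensors, each of shape (batch, num_utts, dim).
        num_utts = feats[0].size(1)
        mask = windowed_context_mask(num_utts, past, future).to(feats[0].device)

        # Stage 1: contextual modeling restricted to the directed window.
        ctx = [attn(x, x, x, attn_mask=mask)[0]
               for attn, x in zip(self.context_attn, feats)]

        # Stage 2: each modality queries the other two, then combines residually
        # (the residual sum is an assumption made for this sketch).
        fused, k = [], 0
        for i in range(3):
            others = []
            for j in range(3):
                if i == j:
                    continue
                others.append(self.cross_attn[k](ctx[i], ctx[j], ctx[j])[0])
                k += 1
            fused.append(ctx[i] + sum(others))

        return self.classifier(torch.cat(fused, dim=-1))  # per-utterance emotion logits


# Usage: a batch of 2 dialogues, 10 utterances each, with pre-extracted 128-d features per modality.
model = TwoStageFusion()
feats = [torch.randn(2, 10, 128) for _ in range(3)]
logits = model(feats)  # shape (2, 10, 6)
```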