Emotion recognition in conversation (ERC) is a crucial component of affective dialogue systems, helping the system understand users' emotions and generate empathetic responses. However, most existing works model speaker and contextual information primarily in the textual modality, or simply leverage multimodal information through feature concatenation. To explore a more effective way of utilizing both multimodal and long-distance contextual information, we propose a new model based on a multimodal fused graph convolutional network, MMGCN. MMGCN not only makes effective use of multimodal dependencies, but also leverages speaker information to model inter-speaker and intra-speaker dependencies. We evaluate the proposed model on two public benchmark datasets, IEMOCAP and MELD, and the results demonstrate the effectiveness of MMGCN, which outperforms other state-of-the-art (SOTA) methods by a significant margin under the multimodal conversation setting.
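The abstract only summarizes the architecture at a high level. As a rough illustration of what a multimodal fused GCN for ERC might look like, the sketch below builds one graph node per (utterance, modality) pair, adds a speaker embedding to each node, and applies residual graph convolutions before classifying each utterance from its fused modality representations. All names here (MultimodalFusionGCN, SimpleGraphConv), the uniform edge weights, and the simple mean-normalized adjacency are assumptions made for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphConv(nn.Module):
    """One graph-convolution layer: H' = relu(D^-1 (A + I) H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        adj = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)      # row-normalize
        return F.relu(self.linear((adj / deg) @ h))


class MultimodalFusionGCN(nn.Module):
    """Hypothetical sketch of a multimodal fused GCN for ERC (not the paper's exact model)."""
    def __init__(self, dim_t, dim_a, dim_v, hidden, n_speakers, n_classes, layers=2):
        super().__init__()
        # project each modality into a shared hidden space
        self.proj = nn.ModuleDict({
            "t": nn.Linear(dim_t, hidden),
            "a": nn.Linear(dim_a, hidden),
            "v": nn.Linear(dim_v, hidden),
        })
        self.spk_emb = nn.Embedding(n_speakers, hidden)
        self.gcn = nn.ModuleList([SimpleGraphConv(hidden, hidden) for _ in range(layers)])
        self.classifier = nn.Linear(3 * hidden, n_classes)

    def build_adjacency(self, n_utt, device):
        # Illustrative edge scheme with uniform weights: connect all nodes of the
        # same modality within a dialogue, and connect the three modality nodes
        # belonging to the same utterance.
        n = 3 * n_utt
        adj = torch.zeros(n, n, device=device)
        for m in range(3):                                   # intra-modality edges
            s, e = m * n_utt, (m + 1) * n_utt
            adj[s:e, s:e] = 1.0
        for i in range(n_utt):                               # cross-modality edges
            idx = [i, n_utt + i, 2 * n_utt + i]
            for a in idx:
                for b in idx:
                    adj[a, b] = 1.0
        return adj

    def forward(self, feat_t, feat_a, feat_v, speakers):
        # feat_*: (n_utt, dim_*); speakers: (n_utt,) integer speaker ids
        n_utt = feat_t.size(0)
        spk = self.spk_emb(speakers)
        h = torch.cat([self.proj["t"](feat_t) + spk,         # speaker-aware node features
                       self.proj["a"](feat_a) + spk,
                       self.proj["v"](feat_v) + spk], dim=0)
        adj = self.build_adjacency(n_utt, h.device)
        for layer in self.gcn:
            h = h + layer(h, adj)                            # residual graph convolution
        h_t, h_a, h_v = h.split(n_utt, dim=0)
        return self.classifier(torch.cat([h_t, h_a, h_v], dim=-1))
```

In this sketch, inter-speaker and intra-speaker dependencies enter only through the speaker embedding added to every node; the dimensions, layer count, and edge construction would need to be tuned (or replaced with the paper's actual graph construction) to reproduce the reported results.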