This paper addresses the problem of uncertain, utterance-level missing modalities in the emotion recognition in conversation (ERC) task. Existing models generally predict a speaker's emotion from the current utterance and its context, and their performance degrades considerably when modalities are missing. We propose Missing-Modality Robust emotion Recognition (M2R2), a framework that trains an emotion recognition model with iterative data augmentation driven by a learned common representation. First, a Party Attentive Network (PANet) is designed to classify emotions by tracking the states and context of all speakers. An attention mechanism between the speaker, the other participants, and the dialogue topic spreads the dependence over multi-time and multi-party utterances instead of a single, possibly incomplete one. Moreover, we formulate the Common Representation Learning (CRL) problem for missing modalities, and use data imputation methods improved by an adversarial strategy to construct additional features for data augmentation. Extensive experiments and case studies on two datasets validate the effectiveness of our methods over baselines for emotion recognition with missing modalities.
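To make the party-attention idea concrete, the following is a minimal sketch (assuming PyTorch; the class and argument names are hypothetical and not taken from the paper) of attending from the current speaker's state to the other participants' states and a topic vector, rather than relying on a single, possibly incomplete utterance.

```python
# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn as nn


class PartyAttention(nn.Module):
    """Attend from the current speaker's state to the other participants'
    states and a topic vector, so the emotion prediction does not hinge on
    one (possibly modality-incomplete) utterance."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, speaker_state, party_states, topic):
        # speaker_state: (batch, dim); party_states: (batch, n_party, dim); topic: (batch, dim)
        context = torch.cat([party_states, topic.unsqueeze(1)], dim=1)  # (batch, n_party + 1, dim)
        q = self.query(speaker_state).unsqueeze(1)                      # (batch, 1, dim)
        k, v = self.key(context), self.value(context)                   # (batch, n_party + 1, dim)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return (attn @ v).squeeze(1)                                    # (batch, dim)
```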