As the use of interactive machines grows, the task of Emotion Recognition in Conversation (ERC) has become increasingly important. If machine-generated sentences reflect emotion, more human-like, empathetic conversations become possible. Because emotion recognition in conversation is inaccurate when previous utterances are not taken into account, many studies incorporate the dialogue context to improve performance. Many recent approaches improve performance further by adding knowledge modules learned from external structured data. However, structured data is difficult to obtain in non-English languages, which makes such approaches hard to extend to other languages. Therefore, we use a pre-trained language model as an extractor of external knowledge and derive a pre-trained memory from it. We introduce CoMPM, which combines the speaker's pre-trained memory with a context model, and find that the pre-trained memory significantly improves the performance of the context model. CoMPM ranks first or second on all datasets and achieves state-of-the-art results among systems that do not leverage structured data. In addition, because structured knowledge is not required, our method can be extended to other languages, unlike previous methods. Our code is available on GitHub (https://github.com/rungjoo/CoMPM).
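To make the combination of the two components concrete, the following is a minimal sketch (not the authors' implementation) of the idea described above: a context model (CoM) encodes the dialogue history, a pre-trained memory module (PM) embeds the current speaker's previous utterances with a frozen pre-trained language model, and the two representations are combined for emotion classification. The model name, pooling, and the simple additive combination are illustrative assumptions.

```python
# Sketch of the CoM + PM combination, assuming a RoBERTa backbone and
# a simple additive fusion; details differ from the released code.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CoMPMSketch(nn.Module):
    def __init__(self, model_name="roberta-base", num_emotions=7):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.context_model = AutoModel.from_pretrained(model_name)  # CoM: fine-tuned
        self.memory_model = AutoModel.from_pretrained(model_name)   # PM: frozen extractor
        for p in self.memory_model.parameters():
            p.requires_grad = False
        hidden = self.context_model.config.hidden_size
        self.classifier = nn.Linear(hidden, num_emotions)

    def encode(self, model, text):
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True)
        # Use the first (<s>/[CLS]) token's hidden state as a sentence vector.
        return model(**inputs).last_hidden_state[:, 0]

    def forward(self, dialogue_history, speaker_utterances):
        # CoM: encode the concatenated dialogue context up to the current turn.
        context_vec = self.encode(self.context_model, " </s> ".join(dialogue_history))
        # PM: average the frozen embeddings of the speaker's prior utterances.
        if speaker_utterances:
            with torch.no_grad():
                mems = [self.encode(self.memory_model, u) for u in speaker_utterances]
            memory_vec = torch.stack(mems).mean(dim=0)
        else:
            memory_vec = torch.zeros_like(context_vec)
        # Combine context and memory, then classify the current utterance's emotion.
        return self.classifier(context_vec + memory_vec)

# Hypothetical usage: classify speaker A's emotion at the latest turn.
model = CoMPMSketch()
history = ["A: I lost my keys again.", "B: Oh no, not again!", "A: I give up."]
prior = ["I lost my keys again."]  # speaker A's earlier utterances in the dialogue
logits = model(history, prior)
```

Freezing the memory extractor reflects the abstract's framing of the pre-trained language model as a source of external knowledge rather than a trainable component; only the context model and classifier would be updated during fine-tuning.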