Emotion Recognition in Conversation (ERC) is a crucial task for understanding human emotions and enabling natural human-computer interaction. Although Large Language Models (LLMs) have recently shown great potential in this field, their ability to capture the intrinsic connections between explicit and implicit emotions remains limited. We propose a novel ERC training framework, PRC-Emo, which integrates Prompt engineering, demonstration Retrieval, and Curriculum learning, with the goal of exploring whether LLMs can effectively perceive emotions in conversational contexts. Specifically, we design emotion-sensitive prompt templates based on both explicit and implicit emotional cues to better guide the model in understanding the speaker's psychological states. We construct the first dedicated demonstration retrieval repository for ERC, which includes training samples from widely used datasets, as well as high-quality dialogue examples generated by LLMs and manually verified. Moreover, we introduce a curriculum learning strategy into the LoRA fine-tuning process, incorporating weighted emotional shifts between same-speaker and different-speaker utterances to assign difficulty levels to dialogue samples, which are then organized in an easy-to-hard training sequence. Experimental results on two benchmark datasets, IEMOCAP and MELD, show that our method achieves new state-of-the-art (SOTA) performance, demonstrating the effectiveness and generalizability of our approach in improving LLM-based emotional understanding.
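The curriculum strategy above can be illustrated with a minimal sketch: score each dialogue by its weighted emotion shifts (weighting same-speaker shifts differently from different-speaker shifts), then sort training samples easy-to-hard. The weights, function names, and normalization below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of curriculum difficulty scoring via weighted emotion
# shifts. The weights w_same and w_diff are assumed values for illustration;
# the paper's exact weighting scheme may differ.

def dialogue_difficulty(utterances, w_same=2.0, w_diff=1.0):
    """utterances: list of (speaker, emotion_label) tuples in dialogue order.

    Counts emotion shifts between consecutive utterances, weighting a shift
    within the same speaker's turn sequence (w_same) differently from a shift
    across speakers (w_diff), then normalizes by the number of transitions.
    """
    score = 0.0
    for (prev_spk, prev_emo), (cur_spk, cur_emo) in zip(utterances, utterances[1:]):
        if cur_emo != prev_emo:  # an emotional shift occurred
            score += w_same if cur_spk == prev_spk else w_diff
    return score / max(len(utterances) - 1, 1)

def easy_to_hard(dialogues):
    """Order training dialogues by ascending difficulty (curriculum order)."""
    return sorted(dialogues, key=dialogue_difficulty)
```

In this sketch, a dialogue with no emotion changes scores 0 and is seen first during fine-tuning, while dialogues dense with same-speaker shifts score highest and are scheduled last.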