Accurate recognition of human emotional states is critical for effective human-machine interaction. Electroencephalography (EEG) offers a reliable source for emotion recognition due to its high temporal resolution and its direct reflection of neural activity. Nevertheless, variations across recording sessions present a major challenge for model generalization. To address this issue, we propose EGDA, a framework that reduces cross-session discrepancies by jointly aligning the global (marginal) and class-specific (conditional) distributions, while preserving the intrinsic structure of EEG data through graph regularization. Experimental results on the SEED-IV dataset demonstrate that EGDA achieves robust cross-session performance, obtaining accuracies of 81.22%, 80.15%, and 83.27% across three transfer tasks, and surpassing several baseline methods. Furthermore, the analysis highlights the Gamma frequency band as the most discriminative and identifies the central-parietal and prefrontal brain regions as critical for reliable emotion recognition.
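To make the alignment idea concrete, below is a minimal sketch of a MEDA/JDA-style objective of the kind the abstract describes: a marginal-distribution term, a class-conditional term driven by target pseudo-labels, and a graph-Laplacian regularizer that preserves local structure. This is an illustrative assumption rather than EGDA's exact formulation; the function names, the linear-kernel MMD estimator, the k-NN Laplacian, and the weights `lam` and `gamma` are all hypothetical choices for exposition.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    # Linear-kernel MMD: squared distance between empirical feature means.
    d = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(d @ d)

def conditional_mmd(Xs, ys, Xt, yt_pseudo, classes):
    # Class-wise MMD; target labels would be pseudo-labels in practice.
    total = 0.0
    for c in classes:
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            total += mmd_linear(Xs_c, Xt_c)
    return total

def knn_laplacian(X, k=5):
    # Unnormalized Laplacian L = D - W of a symmetrized k-NN graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-neighbors
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbors per sample
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize adjacency
    return np.diag(W.sum(axis=1)) - W

def joint_alignment_objective(Xs, ys, Xt, yt_pseudo, lam=1.0, gamma=0.1, k=5):
    # Marginal MMD + weighted conditional MMD + graph smoothness penalty.
    classes = np.unique(ys)
    marginal = mmd_linear(Xs, Xt)
    conditional = conditional_mmd(Xs, ys, Xt, yt_pseudo, classes)
    X_all = np.vstack([Xs, Xt])
    L = knn_laplacian(X_all, k)
    graph = np.trace(X_all.T @ L @ X_all)
    return marginal + lam * conditional + gamma * graph

# Toy usage with random "EEG features" (e.g., differential entropy vectors).
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(60, 16)), rng.normal(0.3, 1.0, size=(50, 16))
ys = rng.integers(0, 4, size=60)           # 4 emotion classes, as in SEED-IV
yt_pseudo = rng.integers(0, 4, size=50)    # pseudo-labels from a source classifier
print(joint_alignment_objective(Xs, ys, Xt, yt_pseudo))
```

In a full method of this family, the objective would be minimized over a projection or classifier rather than evaluated on raw features, with pseudo-labels refined iteratively; the sketch only shows how the three terms combine.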