Cross-dataset emotion recognition is an extremely challenging task in EEG-based affective computing: it is influenced by many factors, which leaves universal models with unsatisfactory results. Given the lack of research on decoding EEG information, we first analyzed the impact of different kinds of EEG information (individual, session, emotion, and trial) on emotion recognition through sample-space visualization, quantification of sample-aggregation phenomena, and energy-pattern analysis on five public datasets. Based on the observed phenomena and patterns, we provided processing methods for, and interpretable analyses of, the various EEG differences. By analyzing the distribution patterns of emotional features, we identified the Individual Emotional Feature Distribution Difference (IEFDD), which we consider the main factor limiting the stability of emotion recognition. After analyzing the limitations of traditional modeling approaches under IEFDD, we proposed the Weight-based Channel-model Matrix Framework (WCMF). To reasonably characterize emotional feature distribution patterns, four weight extraction methods were designed, among which the correction T-test (CT) weight extraction method was optimal. Finally, the performance of WCMF was validated on cross-dataset tasks in two kinds of experiments simulating different practical scenarios, and the results showed that WCMF achieved more stable and better emotion recognition.
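To make the weight-based channel-model idea concrete, below is a minimal sketch, not the paper's implementation: the abstract does not specify the correction applied in CT or the type of channel model, so a plain two-sample t-test stands in for CT, logistic regression stands in for the channel model, and the helper names (t_test_channel_weights, fit_channel_models, weighted_predict) are hypothetical.

```python
# Minimal sketch of t-test-based channel weighting plus a weighted ensemble of
# per-channel models. All design choices here are assumptions for illustration.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression

def t_test_channel_weights(X, y):
    """X: (n_samples, n_channels, n_features); y: binary emotion labels.
    Weight each channel by the mean |t| statistic of its features between classes
    (a plain t-test stands in for the paper's corrected T-test)."""
    n_channels = X.shape[1]
    weights = np.empty(n_channels)
    for c in range(n_channels):
        t, _ = ttest_ind(X[y == 0, c, :], X[y == 1, c, :], axis=0)
        weights[c] = np.mean(np.abs(t))
    return weights / weights.sum()  # normalize so the weights sum to 1

def fit_channel_models(X, y):
    """Fit one classifier per channel; the per-channel models form one row of
    the channel-model matrix."""
    return [LogisticRegression(max_iter=1000).fit(X[:, c, :], y)
            for c in range(X.shape[1])]

def weighted_predict(models, weights, X):
    """Combine per-channel class probabilities using the channel weights."""
    probs = sum(w * m.predict_proba(X[:, c, :])
                for c, (m, w) in enumerate(zip(models, weights)))
    return probs.argmax(axis=1)
```

Under this reading, channels whose features separate the emotion classes more strongly receive larger weights, so their models contribute more to the final decision, which is one plausible way a weighting scheme could dampen the instability attributed to IEFDD.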