Emotion recognition techniques enable computers to classify human affective states into discrete categories. However, emotions may fluctuate rather than remain stable, even within a short time interval. It is also difficult to make full use of the spatial distribution of EEG signals because of their 3-D topological structure. To tackle these issues, we proposed a locally temporal-spatial pattern learning graph attention network (LTS-GAT) in the present study. In the LTS-GAT, a divide-and-conquer scheme was used to examine local information along the temporal and spatial dimensions of EEG patterns based on the graph attention mechanism. A dynamic domain discriminator was added to improve robustness against inter-individual variations in EEG statistics and to learn EEG feature representations that generalize across participants. We evaluated the LTS-GAT on two public datasets for affective computing studies under both individual-dependent and individual-independent paradigms. The effectiveness of the LTS-GAT model was demonstrated in comparison with other mainstream methods. Moreover, visualization methods were used to illustrate the relations between different brain regions and emotion recognition. The weights of different time segments were also visualized to investigate the emotion sparsity problem.
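The abstract names two core mechanisms: graph attention applied locally to EEG electrodes within each time segment, and a domain discriminator that encourages participant-invariant features. The sketch below illustrates these two ideas only; it is a minimal, assumed PyTorch implementation, not the authors' code. The class names (SegmentGraphAttention, TinyLTSGATSketch), the single attention head, the gradient-reversal discriminator, and the tensor shapes (e.g., 62 electrodes, 5 features per node, 15 domains) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of:
# (1) graph attention over electrode nodes within one local time segment, and
# (2) a domain (participant) discriminator trained through gradient reversal.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SegmentGraphAttention(nn.Module):
    """Single-head graph attention over the electrode nodes of one time segment."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x):                         # x: (batch, electrodes, in_dim)
        h = self.proj(x)                          # (B, N, out_dim)
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = torch.softmax(e, dim=-1)          # attention weights over nodes
        return F.elu(torch.bmm(alpha, h))         # (B, N, out_dim)


class TinyLTSGATSketch(nn.Module):
    """Per-segment graph attention -> pooled features -> emotion classifier,
    plus a gradient-reversed domain discriminator (shapes are assumptions)."""
    def __init__(self, feat_dim=5, hid=32, n_classes=3, n_domains=15):
        super().__init__()
        self.gat = SegmentGraphAttention(feat_dim, hid)
        self.classifier = nn.Linear(hid, n_classes)
        self.discriminator = nn.Linear(hid, n_domains)

    def forward(self, x, lambd=1.0):              # x: (B, segments, electrodes, feat_dim)
        b, t, n, f = x.shape
        h = self.gat(x.reshape(b * t, n, f))      # attend within each local segment
        h = h.reshape(b, t, n, -1).mean(dim=(1, 2))   # pool over segments and nodes
        emotion_logits = self.classifier(h)
        domain_logits = self.discriminator(GradReverse.apply(h, lambd))
        return emotion_logits, domain_logits


if __name__ == "__main__":
    model = TinyLTSGATSketch()
    dummy = torch.randn(4, 6, 62, 5)              # 4 trials, 6 segments, 62 electrodes
    emo, dom = model(dummy)
    print(emo.shape, dom.shape)                   # torch.Size([4, 3]) torch.Size([4, 15])
```

In this sketch the emotion classifier and the reversed-gradient discriminator share the pooled features, so minimizing both losses pushes the feature extractor toward representations that predict emotion while being uninformative about participant identity, which is the standard adversarial recipe the abstract's cross-participant evaluation suggests.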