In this paper, we propose TSception, a multi-scale convolutional neural network that learns temporal dynamics and spatial asymmetry from affective electroencephalogram (EEG) signals. TSception consists of dynamic temporal, asymmetric spatial, and high-level fusion layers, which learn discriminative representations in the time and channel dimensions simultaneously. The dynamic temporal layer consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of the EEG signal, allowing the network to learn dynamic temporal and frequency representations. The asymmetric spatial layer takes advantage of the asymmetric neural activations underlying emotional responses, learning discriminative global and hemispheric representations. The learned spatial representations are then fused by a high-level fusion layer. Under a robust nested cross-validation setting, the proposed method is evaluated on two publicly available datasets, DEAP and AMIGOS, and its performance is compared with previously reported methods, including FBFgMDM, FBTSC, unsupervised learning, DeepConvNet, ShallowConvNet, and EEGNet. The results indicate that the proposed method significantly (p<0.05) outperforms the others in terms of classification accuracy. The proposed method can be utilized for emotion recognition in emotion regulation therapy in the future. The source code can be found at: https://github.com/deepBrains/TSception-New
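The core idea of the dynamic temporal layer is that each kernel length is tied to the EEG sampling rate, so each scale covers a fixed time window regardless of the recording device. The following is a minimal, dependency-free sketch of that idea; the specific ratios (0.5, 0.25, 0.125 of one second) and the helper names are illustrative assumptions, not the authors' exact implementation.

```python
def kernel_lengths(fs, ratios=(0.5, 0.25, 0.125)):
    """Multi-scale temporal kernel lengths derived from the sampling rate.

    fs: sampling rate in Hz (e.g., 128 for DEAP).
    ratios: assumed fractions of one second each kernel should span.
    Returns one kernel length (in samples) per scale.
    """
    return [max(1, int(fs * r)) for r in ratios]


def conv1d_valid(signal, kernel):
    """Naive 'valid' 1D convolution (cross-correlation form) for illustration.

    Each multi-scale branch would apply kernels of a different length
    produced by kernel_lengths(); here we show a single-branch pass.
    """
    n, k = len(signal), len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(n - k + 1)
    ]
```

For a 128 Hz recording, `kernel_lengths(128)` yields kernels of 64, 32, and 16 samples, i.e., windows of 0.5 s, 0.25 s, and 0.125 s; shorter kernels respond to faster (higher-frequency) dynamics, which is what gives the layer its multi-scale frequency sensitivity.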