EEG-based emotion recognition typically requires a sufficient number of labeled training samples to build an effective computational model. Labeling EEG data, however, is often expensive and time-consuming. To tackle this problem and reduce the need for labels in EEG-based emotion recognition, we propose a semi-supervised pipeline that jointly exploits unlabeled and labeled data to learn EEG representations. Our semi-supervised framework consists of an unsupervised component and a supervised component. The unsupervised part maximizes the consistency between the original and reconstructed input data using an autoencoder, while the supervised part simultaneously minimizes the cross-entropy between the predicted and ground-truth labels. We instantiate our framework with both a stacked autoencoder and an attention-based recurrent autoencoder. We test our framework on the large-scale SEED EEG dataset and compare our results with several popular semi-supervised methods. Our semi-supervised framework with a deep attention-based recurrent autoencoder consistently outperforms the benchmark methods, even when only small subsets (3\%, 5\%, and 10\%) of the training labels are available, achieving new state-of-the-art semi-supervised performance.
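To make the joint objective concrete, the following is a minimal sketch, not the authors' implementation: a simple fully connected autoencoder in PyTorch whose reconstruction loss is computed on all samples while the cross-entropy term is computed only on the labeled subset. The layer sizes, the loss weight `lam`, the 310-dimensional input (a common per-sample feature size for SEED differential-entropy features), and all names are illustrative assumptions.

```python
# A minimal sketch of the joint semi-supervised objective, assuming PyTorch.
# Architecture sizes, `lam`, and variable names are illustrative, not the paper's.
import torch
import torch.nn as nn

class SemiSupervisedAE(nn.Module):
    def __init__(self, in_dim=310, hid_dim=64, n_classes=3):
        super().__init__()
        # Unsupervised branch: autoencoder reconstructs the input EEG features.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)
        # Supervised branch: classifier on the shared latent representation.
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def joint_loss(model, x, y, labeled_mask, lam=1.0):
    """Reconstruction loss on every sample + cross-entropy on labeled ones."""
    x_hat, logits = model(x)
    recon = nn.functional.mse_loss(x_hat, x)  # unsupervised consistency term
    if labeled_mask.any():                    # supervised term, labeled subset only
        ce = nn.functional.cross_entropy(logits[labeled_mask], y[labeled_mask])
    else:
        ce = torch.zeros((), device=x.device)
    return recon + lam * ce

# Toy batch: 32 samples, only the first 3 carry labels (~10% labeled).
x = torch.randn(32, 310)
y = torch.randint(0, 3, (32,))
mask = torch.zeros(32, dtype=torch.bool)
mask[:3] = True
model = SemiSupervisedAE()
loss = joint_loss(model, x, y, mask)
loss.backward()
```

The same loss structure applies when the feed-forward encoder/decoder is swapped for the attention-based recurrent autoencoder described in the abstract; only the two network branches change.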