In this work, a kernel attention module is presented for the task of EEG-based emotion classification with neural networks. The proposed module realizes self-attention through a kernel trick, requiring significantly fewer trainable parameters and computations than standard attention modules. The design also exposes a single scalar for quantitatively examining how much attention is assigned during deep feature refinement, thereby helping to interpret a trained model. Using EEGNet as the backbone model, extensive experiments are conducted on the SEED dataset to assess the module's performance on within-subject classification tasks against other SOTA attention modules. Requiring only one extra parameter, the inserted module is shown to boost the base model's mean prediction accuracy by more than 1\% across the 15 subjects. A key component of the method is the interpretability of its solutions, which is addressed using several different techniques and is included throughout as part of the dependency analysis.
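To make the idea concrete, the following is a minimal sketch of what such a kernel attention module could look like in PyTorch. It is not the paper's exact formulation: the choice of a plain dot-product (linear) kernel over the feature maps, the channel-wise attention axis, and the use of the single learnable scalar \texttt{gamma} as a residual blending weight are all assumptions made for illustration. The sketch does reflect the two properties stated above: attention is computed without learned query/key/value projections, and exactly one extra trainable parameter is introduced, whose magnitude can be read off a trained model as an interpretable measure of how much attention was applied.

\begin{verbatim}
import torch
import torch.nn as nn

class KernelAttention(nn.Module):
    """Hypothetical kernel self-attention sketch with one trainable scalar."""

    def __init__(self):
        super().__init__()
        # The single extra trainable parameter; initialized to zero so the
        # module starts as an identity mapping (an assumption).
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, channels, time) deep feature maps, e.g. from an EEGNet block.
        # Kernel trick: pairwise similarities via a linear kernel on the raw
        # features, replacing learned query/key projections.
        sim = torch.bmm(x, x.transpose(1, 2)) / x.size(-1)   # (batch, c, c)
        attn = torch.softmax(sim, dim=-1)                    # attention weights
        refined = torch.bmm(attn, x)                         # refined features
        # gamma quantifies how much attention is blended into the features;
        # inspecting it after training supports model interpretation.
        return x + self.gamma * refined

# Usage: insert after a convolutional block; output shape matches input.
module = KernelAttention()
y = module(torch.randn(8, 16, 128))   # (8, 16, 128)
\end{verbatim}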