Different categories of visual stimuli activate different responses in the human brain. These signals can be captured with EEG for use in applications such as Brain-Computer Interfaces (BCI). However, accurate classification of single-trial data is challenging due to the low signal-to-noise ratio of EEG. This work introduces an EEG-ConvTransformer network based on multi-headed self-attention. Unlike other transformer-based models, it incorporates self-attention to capture inter-region interactions, and it further adjoins convolutional filters with multi-head attention in a single module to learn temporal patterns. Experimental results demonstrate that EEG-ConvTransformer achieves improved classification accuracy over state-of-the-art techniques across five different visual stimuli classification tasks. Finally, a quantitative analysis of inter-head diversity shows low similarity across representational subspaces, underscoring the implicit diversity of multi-head attention.
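The two quantitative ideas in the abstract, multi-head self-attention and an inter-head diversity measure, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, dimensions, and the choice of mean pairwise cosine similarity as the diversity metric are illustrative assumptions, shown only to make the mechanism concrete.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, Wq, Wk, Wv, n_heads):
    """Scaled dot-product attention split across heads.

    x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_model).
    Returns per-head outputs of shape (n_heads, seq_len, d_head).
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project, then split the model dimension into n_heads subspaces.
    q = (x @ Wq).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (n_heads, seq, seq)
    attn = softmax(scores, axis=-1)
    return attn @ v  # (n_heads, seq_len, d_head)

def inter_head_similarity(head_outputs):
    """Mean pairwise cosine similarity between flattened head outputs.

    A hypothetical diversity proxy: lower values suggest heads attend to
    more distinct representational subspaces.
    """
    flat = head_outputs.reshape(head_outputs.shape[0], -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T
    n = sim.shape[0]
    return sim[~np.eye(n, dtype=bool)].mean()  # average off-diagonal entries

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 16, 4, 8
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
heads = multi_head_self_attention(x, Wq, Wk, Wv, n_heads)
print(heads.shape)                     # (4, 8, 4)
print(inter_head_similarity(heads))
```

In the paper's module this attention step would sit alongside convolutional filters that learn temporal patterns; the sketch above isolates only the attention and similarity computations.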