Transformers are groundbreaking architectures that have changed the course of deep learning, and many high-performance models have been developed based on the transformer architecture. Transformers are implemented using only attention in an encoder-decoder structure following seq2seq, without using RNNs, yet outperform RNNs. Herein, we investigate a decoding technique for electroencephalography (EEG) during imagined speech and overt speech, built on the self-attention module from the transformer architecture. We performed classification for nine subjects using a convolutional neural network based on EEGNet, which captures temporal-spectral-spatial features from the EEG of imagined speech and overt speech. Furthermore, we applied the self-attention module to EEG decoding to improve performance and reduce the number of parameters. Our results demonstrate the possibility of decoding brain activity during imagined speech and overt speech using attention modules. Moreover, only single-channel EEG or ear-EEG may be needed to decode imagined speech for practical BCIs.
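The self-attention module mentioned above can be sketched in a minimal form. This is a single-head scaled dot-product self-attention over a sequence of feature frames (e.g. the temporal features a CNN such as EEGNet might produce from EEG); the dimensions, weight initialization, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    X: (T, d) array — T temporal feature frames of dimension d
       (hypothetically, frames extracted from EEG by a CNN front end).
    Wq, Wk, Wv: (d, d) projection matrices for queries, keys, values.
    Returns the attended features (T, d) and the attention map (T, T).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) attention logits
    A = softmax(scores, axis=-1)             # each row sums to 1
    return A @ V, A

# Illustrative sizes only: 8 feature frames, 16-dimensional features.
rng = np.random.default_rng(0)
T, d = 8, 16
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

Note that the module's parameter count grows with the feature dimension d (three d-by-d projections) rather than with the sequence length, which is consistent with the abstract's point about keeping the number of parameters low.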