Objective: To evaluate the impact of different kinds of attention mechanisms in Deep Learning (DL) models on Electroencephalography (EEG) classification. Methods: We compared three attention-enhanced DL models: the novel InstaGATs, an LSTM with attention, and a CNN with attention. We used these models to classify normal and abnormal (i.e., artifactual or pathological) EEG patterns. Results: We achieved state-of-the-art performance in all classification problems, despite the large variability of the datasets and the simple architecture of the attention-enhanced models. We also showed that, depending on how the attention mechanism is applied and where the attention layer is located in the model, we can alternatively leverage the information contained in the time, frequency, or space domain of the dataset. Conclusions: With this work, we shed light on the role of different attention mechanisms in the classification of normal and abnormal EEG patterns, and we discussed how they can exploit the intrinsic relationships in the temporal, frequency, and spatial domains of our brain activity. Significance: Attention represents a promising strategy to evaluate the quality of EEG information, and its relevance, in different real-world scenarios. Moreover, it can make the computation easier to parallelize and, thus, speed up the analysis of large electrophysiological (e.g., EEG) datasets.
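To illustrate the general idea of placing an attention layer over one domain of the data, the following is a minimal sketch (not the authors' exact architecture) of an LSTM encoder whose hidden states are pooled with an additive attention layer over the temporal axis of an EEG epoch, assuming PyTorch; the class and parameter names (EEGAttentionLSTM, n_channels, n_classes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EEGAttentionLSTM(nn.Module):
    """Hypothetical LSTM-with-attention classifier for EEG epochs."""
    def __init__(self, n_channels: int, hidden_size: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
        self.attn_score = nn.Linear(hidden_size, 1)   # one score per time step
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) -- EEG epochs with channels as features
        h, _ = self.lstm(x)                                   # (batch, time, hidden)
        weights = torch.softmax(self.attn_score(h), dim=1)    # attention over time
        context = (weights * h).sum(dim=1)                    # weighted temporal pooling
        return self.classifier(context)                       # class logits

# Usage sketch: a batch of 8 epochs, 256 time samples, 19 EEG channels
model = EEGAttentionLSTM(n_channels=19)
logits = model(torch.randn(8, 256, 19))   # -> shape (8, 2)
```

Placing the attention scoring over a different axis (e.g., the channel dimension instead of time) would instead weight spatial information, which is the kind of design choice the Results refer to.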