We present the MEEG dataset, a multi-modal collection of music-induced electroencephalogram (EEG) recordings designed to capture emotional responses to various musical stimuli across different valence and arousal levels. This public dataset facilitates an in-depth examination of brainwave patterns within musical contexts, providing a robust foundation for studying brain network topology during emotional processing. Leveraging the MEEG dataset, we introduce the Attention-based Temporal Learner with Dynamic Graph Neural Network (AT-DGNN), a novel framework for EEG-based emotion recognition. This model combines an attention mechanism with a dynamic graph neural network (DGNN) to capture intricate EEG dynamics. The AT-DGNN achieves state-of-the-art (SOTA) performance, with an accuracy of 83.74% in arousal recognition and 86.01% in valence recognition, outperforming existing methods. Comparative analysis with traditional datasets, such as DEAP, further validates the model's effectiveness and underscores the potency of music as an emotional stimulus. This study advances graph-based learning methodology in brain-computer interfaces (BCI), significantly improving the accuracy of EEG-based emotion recognition. The MEEG dataset and source code are publicly available at https://github.com/xmh1011/AT-DGNN.
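To make the architectural idea concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: a temporal attention stage followed by a graph convolution whose adjacency over EEG channels is learned dynamically rather than fixed. All module names, tensor shapes, and layer choices here are illustrative assumptions for exposition; the authors' actual implementation is in the linked repository.

```python
# Hypothetical sketch of the AT-DGNN idea: temporal attention feeding a
# dynamic graph convolution over EEG channels. Names, sizes, and layer
# choices are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    """Graph convolution with a learnable (dynamic) adjacency over channels."""
    def __init__(self, num_channels: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency logits; softmax keeps edge weights normalized.
        self.adj_logits = nn.Parameter(torch.randn(num_channels, num_channels))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, features)
        adj = torch.softmax(self.adj_logits, dim=-1)   # row-normalized edges
        x = torch.einsum("ij,bjf->bif", adj, x)        # aggregate over neighbors
        return torch.relu(self.proj(x))

class ATDGNNSketch(nn.Module):
    def __init__(self, num_channels=32, time_dim=128, hidden=64, num_classes=2):
        super().__init__()
        # Attention across channels, each embedded by its temporal samples.
        self.attn = nn.MultiheadAttention(time_dim, num_heads=4, batch_first=True)
        self.gconv = DynamicGraphConv(num_channels, time_dim, hidden)
        self.head = nn.Linear(num_channels * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- channels form the attention sequence.
        h, _ = self.attn(x, x, x)
        h = self.gconv(h)
        return self.head(h.flatten(1))

# Toy usage: a batch of 8 trials, 32 EEG channels, 128 time samples each.
model = ATDGNNSketch()
logits = model(torch.randn(8, 32, 128))  # -> (8, 2) class logits
```

The learnable adjacency is the key design choice implied by "dynamic": instead of hard-wiring channel connectivity from electrode geometry, the network infers which channel interactions matter for emotional valence and arousal during training.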