Patterns of brain activity are associated with different brain processes and can be used to identify brain states and make behavioral predictions. However, the relevant features are not readily apparent or accessible. To mine informative latent representations from multichannel recordings of ongoing EEG activity, we propose a novel differentiable decoding pipeline consisting of learnable filters and a pre-determined feature extraction module. Specifically, we introduce filters parameterized by generalized Gaussian functions that offer a smooth derivative for stable end-to-end model training and allow for learning interpretable features. For the feature module, we use signal magnitude and functional connectivity estimates. We demonstrate the utility of our model for emotion recognition from EEG signals on the SEED dataset, as well as on a new EEG dataset of unprecedented size (i.e., 761 subjects), where we identify consistent trends of music perception and related individual differences. The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening. This agrees with the respective specialisation of the temporal lobes regarding music perception proposed in the literature.
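A minimal sketch of the kind of filter the abstract describes: a frequency-domain gain curve parameterized by a generalized Gaussian, which is smooth in its parameters (center, scale, shape) and therefore amenable to gradient-based training. The parameter names and NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def generalized_gaussian(freqs, mu, alpha, beta):
    """Smooth band-pass gain over frequencies (Hz).

    mu:    center frequency of the passband
    alpha: bandwidth scale
    beta:  shape exponent (beta=2 recovers a Gaussian;
           larger beta flattens the passband toward a box filter)
    """
    return np.exp(-((np.abs(freqs - mu) / alpha) ** beta))

# Example: an alpha-band (~10 Hz) filter over an EEG-relevant range.
freqs = np.linspace(0.0, 45.0, 451)          # 0-45 Hz, 0.1 Hz steps
gain = generalized_gaussian(freqs, mu=10.0, alpha=2.0, beta=2.0)
```

Because the gain is differentiable with respect to `mu`, `alpha`, and `beta` (away from the center for non-smooth `beta`), these parameters can be learned end-to-end, and the fitted values remain directly interpretable as a frequency band.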