Most models in cognitive and computational neuroscience that are trained on one subject do not generalize to other subjects because of individual differences. An ideal individual-to-individual neural converter would generate the real neural signals of one subject from those of another, overcoming the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models, one for each of the 72 ordered pairs among 9 subjects. Our results demonstrate that EEG2EEG effectively learns the mapping of neural representations in EEG signals from one subject to another and achieves high conversion performance. Moreover, the generated EEG signals contain clearer representations of visual information than can be obtained from real data. This method establishes a novel, state-of-the-art framework for neural conversion of EEG signals that realizes flexible, high-performance individual-to-individual mapping and offers insight for both neural engineering and cognitive neuroscience.
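To make the pairwise setup concrete, the sketch below shows one way such a converter could be trained; it is a minimal illustration, not the authors' architecture, which the abstract does not specify. The epoch shape, network layers, and training loop are all assumptions; the only detail taken from the text is that 9 subjects yield 72 ordered (source, target) pairs, each with its own model.

```python
# Minimal sketch of a pairwise EEG-to-EEG converter (assumed architecture;
# the abstract does not describe the actual EEG2EEG network).
import itertools
import torch
import torch.nn as nn

N_CHANNELS, N_TIMEPOINTS = 64, 100   # assumed single-epoch EEG shape
FLAT = N_CHANNELS * N_TIMEPOINTS

class EEG2EEG(nn.Module):
    """Maps one subject's single-trial EEG epoch to another subject's."""
    def __init__(self, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                 # (batch, C, T) -> (batch, C*T)
            nn.Linear(FLAT, hidden),
            nn.ReLU(),
            nn.Linear(hidden, FLAT),
        )

    def forward(self, x):
        return self.net(x).view(-1, N_CHANNELS, N_TIMEPOINTS)

# 9 subjects give 9 * 8 = 72 ordered (source, target) pairs,
# hence 72 independent converter models.
subjects = range(9)
pairs = list(itertools.permutations(subjects, 2))
assert len(pairs) == 72

def train_pair(src_eeg, tgt_eeg, epochs=10):
    """src_eeg, tgt_eeg: paired (trials, C, T) tensors recorded from the
    source and target subjects for the same stimuli (assumed alignment)."""
    model = EEG2EEG()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()   # simple reconstruction objective, for illustration
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(src_eeg), tgt_eeg)
        loss.backward()
        opt.step()
    return model
```

One model per ordered pair means the source-to-target and target-to-source directions are learned separately, which is consistent with the count of 72 models reported above.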