This paper tackles the problem of subject-adaptive EEG-based visual recognition. Its goal is to accurately predict the categories of visual stimuli from EEG signals, using only a handful of training samples from the target subject. The key challenge is how to appropriately transfer the knowledge obtained from the abundant data of source subjects to the subject of interest. To this end, we introduce a novel method that learns subject-independent representations by increasing the similarity between features that share the same class but come from different subjects. With a dedicated sampling principle, our model effectively captures the common knowledge shared across subjects, achieving promising performance for the target subject even under harsh settings with limited data. Specifically, on the EEG-ImageNet40 benchmark, our model records top-1 / top-3 test accuracies of 72.6% / 91.6% when using only five EEG samples per class for the target subject. Our code is available at https://github.com/DeepBCI/Deep-BCI/tree/master/1_Intelligent_BCI/Inter_Subject_Contrastive_Learning_for_EEG.
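The core idea described above, increasing the similarity of same-class features drawn from different subjects, can be illustrated as a supervised contrastive objective in which the positives for each anchor are restricted to samples of the same class recorded from a *different* subject. The following NumPy sketch is illustrative only, not the authors' implementation; the function name, the temperature value, and the loss form (an InfoNCE-style log-softmax over all other samples) are assumptions.

```python
import numpy as np

def inter_subject_contrastive_loss(feats, labels, subjects, temperature=0.1):
    """Hypothetical inter-subject contrastive loss sketch.

    feats:    (N, D) feature vectors (will be L2-normalized here).
    labels:   (N,) class label per sample.
    subjects: (N,) subject ID per sample.
    Positives for an anchor are samples with the SAME class label but a
    DIFFERENT subject ID, pulling cross-subject same-class features together.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    # Numerically stable log-softmax over each row (all other samples).
    row_max = np.max(sim, axis=1, keepdims=True)
    logsumexp = row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    log_prob = sim - logsumexp
    # Positive mask: same class, different subject.
    pos = (labels[:, None] == labels[None, :]) & (subjects[:, None] != subjects[None, :])
    # Average negative log-probability over each anchor's positives.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor[pos.any(axis=1)].mean()
```

Under this sketch, embeddings that cluster by class across subjects yield a lower loss than embeddings that cluster by subject, which is the behavior the abstract attributes to the learned subject-independent representation.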