Electroencephalography-to-Text generation (EEG-to-Text), which aims to directly generate natural text from EEG signals, has drawn increasing attention in recent years due to its enormous potential for brain-computer interfaces (BCIs). However, the remarkable discrepancy between the subject-dependent EEG representation and the semantic-dependent text representation poses a great challenge to this task. To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation into a semantic-dependent EEG representation, thereby reducing the discrepancy. Specifically, C-SCL pulls semantically similar EEG representations together while pushing apart dissimilar ones. Moreover, to introduce more meaningful contrastive pairs, we employ curriculum learning both to craft those pairs carefully and to make the learning progress from easy to hard. We conduct extensive experiments on the ZuCo benchmark; combined with diverse models and architectures, our method shows stable improvements across three types of metrics and achieves a new state of the art. Further investigation demonstrates not only its superiority in both the single-subject and low-resource settings but also its robust generalizability in the zero-shot setting.