Accurate and efficient electroencephalography (EEG) analysis is essential for detecting seizures and artifacts in long-term monitoring, with applications spanning hospital diagnostics to wearable health devices. Robust EEG analytics have the potential to greatly improve patient care. However, traditional deep learning models, especially Transformer-based architectures, are hindered by their quadratic time and memory complexity, making them less suitable for resource-constrained environments. To address these challenges, we present FEMBA (Foundational EEG Mamba + Bidirectional Architecture), a novel self-supervised framework that establishes new efficiency benchmarks for EEG analysis through bidirectional state-space modeling. Unlike Transformer-based models, FEMBA scales linearly with sequence length, enabling more scalable and efficient processing of extended EEG recordings. Trained on over 21,000 hours of unlabeled EEG and fine-tuned on three downstream tasks, FEMBA achieves performance competitive with Transformer-based models at significantly lower computational cost. Specifically, it reaches 81.82% balanced accuracy (0.8921 AUROC) on TUAB and 0.949 AUROC on TUAR, while a tiny 7.8M-parameter variant demonstrates viability for resource-constrained devices. These results pave the way for scalable, general-purpose EEG analytics in both clinical and wearable settings, and highlight FEMBA as a promising candidate for wearable applications.
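
The abstract contrasts FEMBA's linear-time bidirectional state-space modeling with the quadratic cost of Transformer attention. The sketch below is a minimal, simplified illustration of that idea in PyTorch, not the authors' implementation: the class names (`SSMScan`, `BiSSMBlock`), the diagonal-decay recurrence, and all hyperparameters are assumptions chosen for brevity. Each direction is processed by a linear-time recurrent scan over the EEG sequence, and the forward and backward outputs are fused, so compute grows linearly with sequence length.

```python
# Illustrative sketch only (not FEMBA's actual code): a bidirectional
# state-space block whose cost is linear in the sequence length.
import torch
import torch.nn as nn


class SSMScan(nn.Module):
    """Linear-time recurrent scan: h_t = a * h_{t-1} + B x_t, y_t = C h_t."""

    def __init__(self, d_model: int, state_dim: int = 16):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(state_dim))  # learnable decay (logit)
        self.B = nn.Linear(d_model, state_dim, bias=False)  # input projection
        self.C = nn.Linear(state_dim, d_model, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        a = torch.sigmoid(self.decay)            # decay in (0, 1) keeps the scan stable
        bx = self.B(x)                           # (batch, seq_len, state_dim)
        h = torch.zeros(x.size(0), a.numel(), device=x.device)
        outs = []
        for t in range(x.size(1)):               # O(seq_len) sequential recurrence
            h = a * h + bx[:, t]
            outs.append(self.C(h))
        return torch.stack(outs, dim=1)          # (batch, seq_len, d_model)


class BiSSMBlock(nn.Module):
    """Fuses one scan over the sequence and one over its time-reversed copy."""

    def __init__(self, d_model: int, state_dim: int = 16):
        super().__init__()
        self.fwd = SSMScan(d_model, state_dim)
        self.bwd = SSMScan(d_model, state_dim)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y_fwd = self.fwd(x)
        y_bwd = self.bwd(x.flip(1)).flip(1)      # backward pass over reversed time
        return self.proj(torch.cat([y_fwd, y_bwd], dim=-1))


if __name__ == "__main__":
    eeg = torch.randn(2, 1024, 64)               # (batch, time steps, feature dim)
    block = BiSSMBlock(d_model=64)
    print(block(eeg).shape)                      # torch.Size([2, 1024, 64])
```

Because the per-step state is fixed-size, doubling the recording length roughly doubles the work, whereas full self-attention over the same sequence would quadruple it; this is the scaling argument the abstract makes for long EEG recordings and small devices.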