Machine learning has achieved great success in electroencephalogram (EEG) based brain-computer interfaces (BCIs). Most existing BCI research has focused on improving accuracy, but few studies have considered security. Recent work, however, has shown that EEG-based BCIs are vulnerable to adversarial attacks, in which small perturbations added to the input cause misclassification. Detecting adversarial examples is crucial both for understanding this phenomenon and for defending against it. This paper, for the first time, explores adversarial detection in EEG-based BCIs. Experiments on two EEG datasets using three convolutional neural networks verify the performance of multiple detection approaches. We show that both white-box and black-box attacks can be detected, and that the former are easier to detect.
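To make the attack concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), a standard white-box attack of the kind the abstract refers to. The linear classifier, its weights, and the "EEG feature" vector are all hypothetical stand-ins chosen for illustration; they are not taken from the paper or its datasets.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier standing in for a trained EEG model
# (weights chosen only for illustration, not from the paper).
w = np.array([0.5, -0.3, 0.2])

def predict(x):
    return int(w @ x > 0)  # class 1 if the score is positive

# A clean "EEG feature" vector that the model classifies as class 1.
x = np.array([1.0, 0.5, 0.4])

# FGSM: step in the direction of the sign of the gradient of the loss
# with respect to the input. For logistic loss with true label y = 1,
# the input gradient is (sigmoid(w @ x) - 1) * w.
grad = (sigmoid(w @ x) - 1.0) * w
eps = 0.5                        # per-feature perturbation budget
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # the perturbation flips the predicted label
```

In a real EEG-based BCI the model is a deep network and the gradient is obtained by backpropagation, but the principle is the same: a perturbation bounded elementwise by `eps` moves the input across the decision boundary.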