The inter-/intra-subject variability of electroencephalography (EEG) makes practical use of brain-computer interfaces (BCIs) difficult. In general, a BCI system requires a calibration procedure to tune the model every time the system is used. This problem is recognized as a major obstacle to BCI, and approaches based on transfer learning (TL) have recently emerged to overcome it. However, many BCI paradigms are limited in that they follow a structure that presents the label first and then measures the corresponding "imagery"; consequently, the negative effect of source subjects whose data contain no control signals has often been ignored in subject-to-subject TL. The main purpose of this paper is to propose a method for excluding subjects that are expected to have a negative impact on subject-to-subject TL training, which conventionally uses data from as many subjects as possible. We propose a BCI framework that uses only high-confidence subjects for TL training. In our framework, a deep neural network selects useful subjects for the TL process and excludes noisy subjects, using a co-teaching algorithm based on the small-loss trick. We evaluated the framework with leave-one-subject-out validation on two public datasets (the 2020 international BCI competition track 4 and the OpenBMI dataset). Our experimental results showed that confidence-aware TL, which selects subjects with small-loss instances, improves the generalization performance of BCI.
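To make the small-loss selection concrete, below is a minimal sketch of one co-teaching update step, assuming a PyTorch setup. The function name `coteach_step` and the parameter `keep_ratio` are hypothetical illustration choices, not identifiers from the paper; the paper applies this idea at the subject level to filter noisy source subjects, whereas this sketch shows the generic per-sample form of the small-loss trick.

```python
import torch
import torch.nn.functional as F

def coteach_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio):
    """One co-teaching update: each network picks its small-loss
    (high-confidence) samples, and its peer trains on that selection."""
    # Per-sample cross-entropy losses under each network
    loss_a = F.cross_entropy(model_a(x), y, reduction="none")
    loss_b = F.cross_entropy(model_b(x), y, reduction="none")

    # Small-loss trick: keep the k lowest-loss samples per network
    k = max(1, int(keep_ratio * len(y)))
    idx_a = torch.argsort(loss_a)[:k]  # clean set chosen by A
    idx_b = torch.argsort(loss_b)[:k]  # clean set chosen by B

    # Cross-update: A learns from B's selection, and vice versa,
    # so each network's selection bias does not reinforce itself.
    opt_a.zero_grad()
    F.cross_entropy(model_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(model_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```

The cross-update is the key design choice: because the two networks start from different initializations, they disagree on which noisy samples look "easy", so exchanging selections filters noise more robustly than a single network selecting its own training data.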