This study addresses the significant challenge of developing efficient decoding algorithms for classifying steady-state visual evoked potentials (SSVEPs) in scenarios with extremely scarce calibration data, where only one calibration trial is available for each stimulus target. To tackle this problem, we introduce a novel cross-subject dual-domain fusion network (CSDuDoFN) incorporating task-related and task-discriminant component analysis (TRCA and TDCA) for one-shot SSVEP classification. The CSDuDoFN framework is designed to comprehensively transfer information from source subjects, while TRCA and TDCA are employed to exploit the single available calibration trial of the target subject. Specifically, we develop a multi-reference least-squares transformation (MLST) to map data from both the source subjects and the target subject into the domain of sine-cosine templates, thereby mitigating inter-individual variability and facilitating transfer learning. Subsequently, the transformed data in the sine-cosine template domain and the data in the original domain are used to train a convolutional neural network (CNN) model, with their feature maps fused at distinct network layers. To further capitalize on the single calibration trial of the target subject, source aliasing matrix estimation (SAME) data augmentation is incorporated into the training of the ensemble TRCA (eTRCA) and TDCA models. Ultimately, the outputs of the CSDuDoFN, eTRCA, and TDCA models are combined for the final SSVEP classification. The effectiveness of the proposed approach is comprehensively evaluated on three publicly available SSVEP datasets, achieving the best performance on two datasets and competitive performance on the third. This underscores the potential for integrating brain-computer interfaces (BCIs) into daily life.
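To make the template-domain mapping concrete, the sketch below illustrates the least-squares transformation idea underlying MLST: projecting a single EEG trial onto a sine-cosine reference template. The function names, harmonic count, and array shapes are illustrative assumptions for a minimal single-reference case, not the authors' implementation (which fuses multiple references per stimulus).

```python
import numpy as np

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=5):
    """Build a (2*n_harmonics, n_samples) sine-cosine reference for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    ref = []
    for h in range(1, n_harmonics + 1):
        ref.append(np.sin(2 * np.pi * h * freq * t))
        ref.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(ref)

def lst_to_template_domain(X, Y):
    """Least-squares mapping of one EEG trial X (channels x samples)
    onto a reference template Y (2*n_harmonics x samples).

    Solves min_P ||Y - P X||_F^2 and returns the transformed trial P X,
    i.e. the trial expressed in the sine-cosine template domain.
    """
    # lstsq solves X.T @ Pt = Y.T for Pt, so Pt has shape (channels, 2*n_harmonics)
    Pt, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return Pt.T @ X

# Illustrative usage with synthetic data (250 Hz sampling, 1 s trial, 9 channels)
fs, n_samples = 250, 250
X = np.random.randn(9, n_samples)               # one EEG trial (channels x samples)
Y = sine_cosine_reference(12.0, fs, n_samples)  # reference for a 12 Hz stimulus
X_tpl = lst_to_template_domain(X, Y)            # (2*n_harmonics, n_samples)
```

Mapping every trial (from source subjects and the target subject alike) toward the same frequency-specific references is what reduces inter-individual variability before the dual-domain CNN is trained.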