Decoding imagined speech from human brain signals is a challenging and important problem that may enable communication via brain signals alone. While imagined speech can serve as a paradigm for silent communication via brain signals, it is difficult to collect enough stable data to train a decoding model. Meanwhile, spoken speech data are relatively easy to obtain, which motivates leveraging spoken speech brain signals to decode imagined speech. In this paper, we performed a preliminary analysis to examine whether spoken speech electroencephalography data can be utilized to decode imagined speech, by directly applying a model pre-trained on spoken speech brain signals to imagined speech brain signals. Whereas the classification performance when imagined speech data alone were used for training and validation was 30.5 \%, the transferred performance of the spoken speech based classifier on imagined speech data was 26.8 \%, with no significant difference found compared to the imagined speech based classifier (p = 0.0983, chi-square = 4.64). For a more comprehensive analysis, we compared this result with a visual imagery dataset, which is naturally less related to spoken speech than imagined speech is. Visual imagery showed a solely trained performance of 31.8 \% and a transferred performance of 26.3 \%, a statistically significant difference (p = 0.022, chi-square = 7.64). Our results imply the potential of applying spoken speech to decode imagined speech, as well as common features underlying the two paradigms.
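As a minimal sketch (not the authors' code) of the evaluation described above, the snippet below trains one classifier on imagined speech alone, transfers a second classifier trained on spoken speech to the same imagined speech test trials, and compares the two accuracies with a chi-square test on the correct/incorrect counts. The feature matrices, the random synthetic data, and the linear discriminant classifier are illustrative assumptions; the paper does not specify the exact model, features, or test construction here.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical EEG feature matrices: (trials, features) with 5-class labels.
X_spoken, y_spoken = rng.normal(size=(300, 64)), rng.integers(0, 5, 300)
X_imag_tr, y_imag_tr = rng.normal(size=(200, 64)), rng.integers(0, 5, 200)
X_imag_te, y_imag_te = rng.normal(size=(100, 64)), rng.integers(0, 5, 100)

# Classifier trained only on imagined speech (the "solely trained" condition).
imag_clf = LinearDiscriminantAnalysis().fit(X_imag_tr, y_imag_tr)
pred_imag = imag_clf.predict(X_imag_te)

# Classifier trained on spoken speech, applied unchanged to imagined speech.
spoken_clf = LinearDiscriminantAnalysis().fit(X_spoken, y_spoken)
pred_transfer = spoken_clf.predict(X_imag_te)

# Chi-square test on the correct/incorrect counts of the two classifiers.
n = len(y_imag_te)
correct_imag = int((pred_imag == y_imag_te).sum())
correct_transfer = int((pred_transfer == y_imag_te).sum())
table = [[correct_imag, n - correct_imag],
         [correct_transfer, n - correct_transfer]]
chi2, p, _, _ = chi2_contingency(table)
print(f"solely trained acc: {correct_imag / n:.3f}, "
      f"transferred acc: {correct_transfer / n:.3f}, "
      f"chi-square = {chi2:.2f}, p = {p:.4f}")
\end{verbatim}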