Non-invasive neural interfaces have achieved only modest performance in cortical decoding of limb movements and their forces, compared with invasive brain-computer interfaces (BCIs). While non-invasive methodologies are safer, cheaper and vastly more accessible, their signals suffer from poor resolution in either the spatial domain (EEG) or the temporal domain (the BOLD signal of functional Near Infrared Spectroscopy, fNIRS). Non-invasive BCI decoding of bimanual force generation and of the continuous force signal has not been realised before, so we introduce an isometric grip force tracking task to evaluate such decoding. We find that combining EEG and fNIRS with deep neural networks decodes continuous grip force modulations produced by the left and the right hand better than linear models. Our multi-modal deep learning decoder achieves 55.2 FVAF[%] in force reconstruction and improves decoding performance by at least 15% over each individual modality. Our results show that continuous hand force decoding from cortical signals obtained with non-invasive mobile brain imaging is achievable, with immediate impact for rehabilitation, restoration and consumer applications.
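The reconstruction accuracy above is reported as the fraction of variance accounted for (FVAF), a standard goodness-of-fit measure for continuous decoding: FVAF = 1 - SSE/SST between the measured and the decoded force trace. The sketch below is a minimal illustration of how an FVAF[%] score of this kind could be computed; the function name fvaf and the synthetic force traces are illustrative assumptions, not the authors' code.

import numpy as np

def fvaf(y_true, y_pred):
    # Fraction of variance accounted for, in percent:
    # FVAF = 1 - SSE / SST, where SSE is the residual error of the decoder
    # and SST is the variance of the measured force around its mean.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 100.0 * (1.0 - sse / sst)

# Illustrative example with synthetic traces (not experimental data):
t = np.linspace(0.0, 10.0, 1000)
measured = np.sin(t)                                   # stand-in for the recorded grip force
decoded = np.sin(t) + 0.3 * np.random.randn(t.size)    # stand-in for a decoder output
print(f"FVAF: {fvaf(measured, decoded):.1f}%")

Under this definition, a perfect reconstruction gives 100%, predicting the mean force gives 0%, and worse-than-mean predictions give negative values, which is why FVAF is a stricter criterion than correlation for continuous force decoding.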