Decoding cognitive brain states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been applied to decoding multiple brain states and have achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing on advances in machine learning, we integrated attention modules into brain decoders to facilitate an in-depth interpretation of DNN channels. A 4D convolution operation was also included to extract the spatiotemporal interactions within the fMRI signal. The experiments showed that the proposed model achieves very high accuracy (97.4%) and outperforms previous studies on the seven task benchmarks from the Human Connectome Project (HCP) dataset. Visualization analysis further illustrated the hierarchical emergence of task-specific masks with network depth. Finally, the model was retrained to regress individual traits within the HCP and to classify viewed images from the BOLD5000 dataset, respectively; transfer learning also achieved good performance. A further visualization analysis showed that, after transfer learning, low-level attention masks remained similar to those of the source domain, whereas high-level attention masks changed adaptively. In conclusion, the proposed 4D model with attention modules performed well and facilitated the interpretation of DNNs, which is helpful for subsequent research.
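The two architectural ingredients named above, a 4D convolution over the full spatiotemporal fMRI volume and a channel-attention module whose learned gates serve as interpretable masks, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, shapes, and the squeeze-and-excitation-style gating are illustrative assumptions, shown here only to make the mechanism concrete.

```python
import numpy as np

def conv4d(x, kernel):
    """Valid-mode 4D cross-correlation over (time, depth, height, width).

    x: array of shape (T, D, H, W); kernel: (kt, kd, kh, kw).
    Sliding the kernel jointly over time and space captures the
    spatiotemporal interactions a 3D (space-only) conv would miss.
    """
    kt, kd, kh, kw = kernel.shape
    T, D, H, W = x.shape
    out = np.zeros((T - kt + 1, D - kd + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for d in range(out.shape[1]):
            for h in range(out.shape[2]):
                for w in range(out.shape[3]):
                    out[t, d, h, w] = np.sum(
                        x[t:t + kt, d:d + kd, h:h + kh, w:w + kw] * kernel
                    )
    return out

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style attention over feature channels.

    feats: (C, T, D, H, W). Each channel is average-pooled to a scalar,
    passed through a two-layer bottleneck (w1, w2), squashed to (0, 1)
    with a sigmoid, and used to rescale its channel. The resulting gate
    vector is what a visualization analysis can read out as a mask.
    """
    pooled = feats.mean(axis=(1, 2, 3, 4))            # squeeze: (C,)
    hidden = np.maximum(0.0, w1 @ pooled)             # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gates, (C,)
    return feats * gates[:, None, None, None, None], gates
```

In a full decoder these two pieces would alternate (4D conv, attention, nonlinearity) across layers; inspecting the per-layer `gates` across tasks is one way the hierarchical, task-specific masks described above could be visualized.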