Artificial neural networks (ANNs), originally inspired by biological neural networks (BNNs), have achieved remarkable success in many tasks such as visual representation learning. However, whether semantic correlations and connections exist between the visual representations in ANNs and those in BNNs remains largely unexplored, due to both the lack of an effective tool to link and couple the two different domains and the lack of a general and effective framework for representing the visual semantics in BNNs such as human functional brain networks (FBNs). To answer this question, we propose a novel computational framework, Synchronized Activations (Sync-ACT), to couple the visual representation spaces and semantics between ANNs and BNNs in the human brain based on naturalistic functional magnetic resonance imaging (nfMRI) data. With this approach, we are able, for the first time, to semantically annotate the neurons in ANNs with biologically meaningful descriptions derived from human brain imaging. We evaluated the Sync-ACT framework on two publicly available movie-watching nfMRI datasets. The experiments demonstrate a) the significant correlation and similarity between the semantics of the visual representations in FBNs and those in a variety of convolutional neural network (CNN) models, and b) the close relationship between a CNN's representational similarity to BNNs and its performance on image classification tasks. Overall, our study introduces a general and effective paradigm for coupling ANNs and BNNs and provides novel insights for future studies such as brain-inspired artificial intelligence.
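To make the coupling idea concrete, the following is a minimal sketch, not the paper's actual implementation, of how CNN unit activations could be matched to FBN time courses recorded during the same movie stimulus. It assumes that per-unit CNN activations have already been extracted frame by frame and resampled to the fMRI temporal resolution, and that FBN time courses and their semantic labels are available; the function names (e.g., `annotate_cnn_units`) and the synthetic data are purely illustrative.

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson correlation between two 1-D time series."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def annotate_cnn_units(unit_ts, fbn_ts, fbn_labels):
    """Annotate each CNN unit with the FBN whose temporal activation
    it most closely tracks during the shared movie stimulus.

    unit_ts    : (n_units, n_timepoints) CNN unit activations, resampled
                 to the fMRI temporal resolution (e.g., averaged per TR).
    fbn_ts     : (n_fbns, n_timepoints) FBN time courses from nfMRI.
    fbn_labels : list of n_fbns semantic labels (e.g., "primary visual").
    """
    annotations = []
    for u in unit_ts:
        corrs = [pearson_corr(u, f) for f in fbn_ts]
        best = int(np.argmax(corrs))
        annotations.append((fbn_labels[best], corrs[best]))
    return annotations

# Toy example with synthetic data; real inputs would come from movie-watching
# nfMRI and per-frame CNN activations aligned to the same stimulus.
rng = np.random.default_rng(0)
n_tr = 200
fbn_ts = rng.standard_normal((3, n_tr))
fbn_labels = ["primary visual", "dorsal attention", "default mode"]
unit_ts = fbn_ts[[0, 2]] + 0.5 * rng.standard_normal((2, n_tr))  # two CNN units

for unit_id, (label, r) in enumerate(annotate_cnn_units(unit_ts, fbn_ts, fbn_labels)):
    print(f"unit {unit_id}: best-matching FBN = {label} (r = {r:.2f})")
```

Under this simple correlation-based view, each ANN neuron inherits the semantic label of the FBN whose response profile it best synchronizes with, which is one plausible way to read the "synchronized activations" coupling described above.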