Artificial intelligence and neuroscience interact deeply. Artificial neural networks (ANNs) have been a versatile tool for studying neural representations in the ventral visual stream, and knowledge from neuroscience in turn inspires ANN models to improve task performance. However, how to merge these two directions into a unified model has been less studied. Here, we propose a hybrid model, the deep auto-encoder with neural response (DAE-NR), which incorporates information from the visual cortex into ANNs to achieve better image reconstruction and higher representational similarity between biological and artificial neurons. Specifically, the same visual stimuli (i.e., natural images) are presented to both the mouse brain and the DAE-NR. The DAE-NR jointly learns to map a specific layer of its encoder network to the biological neural responses in the ventral visual stream via a mapping function and to reconstruct the visual input via the decoder. Our experiments demonstrate that, if and only if trained with this joint learning, DAE-NRs can (i) improve image reconstruction performance and (ii) increase the representational similarity between biological and artificial neurons. The DAE-NR offers a new perspective on the integration of computer vision and visual neuroscience.
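The joint learning described above combines two objectives: reconstructing the input image through the decoder, and fitting the recorded neural responses from a chosen encoder layer through a mapping function. The sketch below illustrates this combined objective under simplifying assumptions: the mapping function is taken to be linear, the loss weight `lam` and all array shapes are illustrative, and the function names are hypothetical (the paper's actual architecture and loss formulation may differ).

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    # Pixel-wise MSE between input images x and decoder output x_hat.
    return np.mean((x - x_hat) ** 2)

def neural_fitting_loss(h, r, W, b):
    # MSE between mapped encoder features g(h) = hW + b and recorded
    # biological responses r (stimuli x neurons). A linear mapping is
    # an illustrative assumption, not necessarily the paper's choice.
    return np.mean((h @ W + b - r) ** 2)

def joint_loss(x, x_hat, h, r, W, b, lam=1.0):
    # Joint objective: reconstruct the stimulus while matching the
    # neural responses; lam (assumed hyperparameter) trades off the
    # two terms. Without the second term, this reduces to a plain DAE.
    return reconstruction_loss(x, x_hat) + lam * neural_fitting_loss(h, r, W, b)
```

In this formulation, setting `lam=0` recovers an ordinary auto-encoder, which is consistent with the abstract's claim that the neural-similarity and reconstruction gains appear only when the two objectives are optimized jointly.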