Decoding human visual neural representations is a challenging task with great scientific significance in revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods struggle to generalize to novel categories for which no corresponding neural data are available for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the scarcity of paired (stimulus-response) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-products-of-experts (MoPoE) formulation to infer a latent code that enables a coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency in the case of limited brain activity data, we exploit both intra- and inter-modality mutual information maximization regularization terms. In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate the visual and textual features obtained from extra categories. Finally, we construct three trimodal matching datasets, and the extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models using the combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli. Code and data: https://github.com/ChangdeDu/BraVL.
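To make the mixture-of-products-of-experts fusion concrete, the following is a minimal sketch assuming Gaussian unimodal posteriors parameterized by `mu` and `logvar` outputs of per-modality encoders; the function names, the implicit standard-normal prior expert, and the uniform mixture over non-empty modality subsets are illustrative assumptions, not the authors' exact implementation.

```python
import itertools
import torch

def poe(mus, logvars):
    """Product of Gaussian experts via precision-weighted fusion.
    mus, logvars: lists of (batch, latent_dim) tensors, one per modality.
    A standard-normal prior expert (mu=0, logvar=0) is included."""
    mus = [torch.zeros_like(mus[0])] + list(mus)
    logvars = [torch.zeros_like(logvars[0])] + list(logvars)
    precisions = [torch.exp(-lv) for lv in logvars]   # 1 / sigma^2
    total_precision = sum(precisions)
    mu = sum(m * p for m, p in zip(mus, precisions)) / total_precision
    logvar = -torch.log(total_precision)              # log of fused variance
    return mu, logvar

def mopoe(mus, logvars):
    """Mixture-of-Products-of-Experts: one PoE posterior per non-empty
    subset of modalities; the joint posterior is a uniform mixture over
    these subset posteriors."""
    n = len(mus)
    subsets = [s for r in range(1, n + 1)
               for s in itertools.combinations(range(n), r)]
    return [poe([mus[i] for i in s], [logvars[i] for i in s])
            for s in subsets]

def sample(mu, logvar):
    """Reparameterized draw z ~ N(mu, exp(logvar))."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

if __name__ == "__main__":
    B, D = 4, 32  # batch size, latent dimension (illustrative values)
    mus = [torch.randn(B, D) for _ in range(3)]       # brain, visual, text
    logvars = [torch.zeros(B, D) for _ in range(3)]
    posteriors = mopoe(mus, logvars)                  # 2^3 - 1 = 7 subsets
    k = torch.randint(len(posteriors), (1,)).item()   # uniform mixture draw
    z = sample(*posteriors[k])
    print(len(posteriors), z.shape)                   # 7, torch.Size([4, 32])
```

Because each subset posterior is itself a valid product of experts, sampling from a uniformly chosen subset lets the model generate any modality from any observed combination of the other two, which is what enables decoding novel categories from visual and textual features alone.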