Reconstructing perceived natural images or decoding their categories from fMRI signals are challenging tasks with great scientific significance. Due to the lack of paired samples, most existing methods fail to generate semantically recognizable reconstructions and struggle to generalize to novel classes. In this work, we propose, for the first time, a task-agnostic brain decoding model that unifies the visual stimulus classification and reconstruction tasks in a shared semantic space. We denote it as BrainCLIP, which leverages CLIP's cross-modal generalization ability to bridge the modality gap between brain activities, images, and texts. Specifically, BrainCLIP is a VAE-based architecture that transforms fMRI patterns into the CLIP embedding space by combining visual and textual supervision. Note that previous works rarely use multi-modal supervision for visual stimulus decoding. Our experiments demonstrate that textual supervision significantly boosts the performance of decoding models compared to using image supervision alone. BrainCLIP can be applied to multiple scenarios, including fMRI-to-image generation, fMRI-image matching, and fMRI-text matching. Compared with BraVL, a recently proposed multi-modal method for fMRI-based brain decoding, BrainCLIP achieves significantly better performance on the novel class classification task. BrainCLIP also establishes a new state of the art for fMRI-based natural image reconstruction in terms of high-level image features.
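To make the described pipeline concrete, the sketch below shows one plausible reading of the abstract: a VAE-style encoder mapping fMRI voxel patterns into CLIP's embedding space, trained against both CLIP image and CLIP text embeddings. The layer sizes, loss weights, and overall architecture are illustrative assumptions and not the paper's actual implementation; CLIP embeddings are assumed to be precomputed (e.g., 512-d from ViT-B/32).

```python
# Illustrative sketch only: a VAE-style fMRI encoder aligned to CLIP space with
# joint image- and text-side supervision. All dimensions and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FMRIEncoder(nn.Module):
    """Maps fMRI voxel vectors -> latent distribution -> CLIP embedding space."""
    def __init__(self, n_voxels: int, latent_dim: int = 512, clip_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_voxels, 2048), nn.ReLU(),
            nn.Linear(2048, 1024), nn.ReLU(),
        )
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        self.to_clip = nn.Linear(latent_dim, clip_dim)  # projection into CLIP space

    def forward(self, voxels: torch.Tensor):
        h = self.backbone(voxels)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        pred = F.normalize(self.to_clip(z), dim=-1)  # unit-norm prediction in CLIP space
        return pred, mu, logvar

def multimodal_loss(pred, clip_img, clip_txt, mu, logvar, beta=1e-4):
    """Cosine alignment to CLIP image AND text embeddings, plus a VAE KL term.
    The relative weighting (beta) is a placeholder assumption."""
    img_align = 1 - F.cosine_similarity(pred, F.normalize(clip_img, dim=-1)).mean()
    txt_align = 1 - F.cosine_similarity(pred, F.normalize(clip_txt, dim=-1)).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return img_align + txt_align + beta * kl

# Toy usage: random tensors stand in for fMRI patterns and precomputed CLIP embeddings.
encoder = FMRIEncoder(n_voxels=4000)
voxels = torch.randn(8, 4000)
clip_img, clip_txt = torch.randn(8, 512), torch.randn(8, 512)
pred, mu, logvar = encoder(voxels)
loss = multimodal_loss(pred, clip_img, clip_txt, mu, logvar)
loss.backward()
```

Under this reading, once fMRI patterns are projected into CLIP space, the downstream scenarios named in the abstract reduce to operations in that shared space: fMRI-image and fMRI-text matching become nearest-neighbor retrieval against CLIP embeddings, and fMRI-to-image generation amounts to conditioning an image generator on the predicted embedding.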