Reconstructing perceived natural images and decoding their categories from fMRI signals are challenging tasks of great scientific significance. Due to the scarcity of paired fMRI-image samples, most existing methods fail to produce semantically recognizable reconstructions and generalize poorly to novel classes. In this work, we propose, for the first time, a task-agnostic brain decoding model that unifies visual stimulus classification and reconstruction in a shared semantic space. We denote it as BrainCLIP, which leverages CLIP's cross-modal generalization ability to bridge the modality gap between brain activity, images, and text. Specifically, BrainCLIP is a VAE-based architecture that maps fMRI patterns into the CLIP embedding space under combined visual and textual supervision; such multi-modal supervision has rarely been used in prior work on visual stimulus decoding. Our experiments demonstrate that adding textual supervision significantly boosts decoding performance compared with image supervision alone. BrainCLIP can be applied to multiple scenarios, including fMRI-to-image generation, fMRI-image matching, and fMRI-text matching. Compared with BraVL, a recently proposed multi-modal method for fMRI-based brain decoding, BrainCLIP achieves significantly better performance on novel-class classification. BrainCLIP also establishes a new state of the art for fMRI-based natural image reconstruction in terms of high-level image features.
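To make the described approach concrete, below is a minimal sketch (not the authors' code) of a VAE-style fMRI encoder aligned to a CLIP-like joint embedding space with combined image and text supervision. The network sizes, loss weights, and the assumption that CLIP image/text embeddings of the stimuli and their captions are precomputed are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch of the core idea: encode fMRI patterns into CLIP space
# and align them to both CLIP image and CLIP text embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FMRIEncoder(nn.Module):
    """Maps an fMRI pattern to a mean/log-variance pair in CLIP embedding space."""
    def __init__(self, n_voxels: int, clip_dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_voxels, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
        )
        self.mu = nn.Linear(hidden, clip_dim)
        self.logvar = nn.Linear(hidden, clip_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a latent vector in CLIP space.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def alignment_loss(z, clip_img, clip_txt, mu, logvar, beta=1e-4, w_txt=0.5):
    """Cosine alignment to CLIP image and text embeddings, plus a KL regularizer."""
    z, clip_img, clip_txt = (F.normalize(t, dim=-1) for t in (z, clip_img, clip_txt))
    img_term = 1.0 - (z * clip_img).sum(-1).mean()   # visual supervision
    txt_term = 1.0 - (z * clip_txt).sum(-1).mean()   # textual supervision
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return img_term + w_txt * txt_term + beta * kl

# Toy usage with random stand-ins for fMRI data and precomputed CLIP embeddings.
fmri = torch.randn(8, 4000)        # batch of 8 fMRI patterns, 4000 voxels each
clip_img = torch.randn(8, 512)     # CLIP image embeddings of the viewed stimuli
clip_txt = torch.randn(8, 512)     # CLIP text embeddings of their captions
encoder = FMRIEncoder(n_voxels=4000)
z, mu, logvar = encoder(fmri)
loss = alignment_loss(z, clip_img, clip_txt, mu, logvar)
loss.backward()
```

Under this framing, the matching tasks mentioned in the abstract (fMRI-image and fMRI-text matching) reduce to nearest-neighbor retrieval between the predicted embedding and candidate CLIP embeddings, while fMRI-to-image generation would condition an image generator on the predicted CLIP-space vector.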