Multi-modal medical images provide complementary soft-tissue characteristics that aid in the screening and diagnosis of diseases. However, limited scanning time, image corruption, and varying imaging protocols often result in incomplete multi-modal images, limiting the use of multi-modal data in clinical practice. To address this issue, we propose a novel unified multi-modal image synthesis method for missing modality imputation. Our method adopts a generative adversarial architecture and aims to synthesize missing modalities from any combination of available ones with a single model. To this end, we design a Commonality- and Discrepancy-Sensitive Encoder for the generator that exploits both the modality-invariant and the modality-specific information contained in the input modalities. Incorporating both types of information facilitates the generation of images with consistent anatomy and realistic details of the desired distribution. In addition, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which makes the network robust to randomly missing modalities. The module performs both hard integration and soft integration, ensuring effective feature combination while avoiding information loss. Experiments on two public multi-modal magnetic resonance datasets verify that the proposed method handles a variety of synthesis tasks effectively and outperforms previous methods.
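The abstract does not specify the encoder's internals, so the following PyTorch sketch is only a hypothetical reading of a Commonality- and Discrepancy-Sensitive Encoder: a weight-shared branch extracts modality-invariant (common) features while independent per-modality branches extract modality-specific (discrepant) features. All names here (`CDSEncoder`, `feat_ch`, the layer choices) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CDSEncoder(nn.Module):
    """Hypothetical sketch of a Commonality- and Discrepancy-Sensitive Encoder.

    A single shared branch captures anatomy common to all modalities
    (modality-invariant features); one branch per modality captures
    contrast-specific detail (modality-specific features).
    """

    def __init__(self, n_modalities: int, in_ch: int = 1, feat_ch: int = 64):
        super().__init__()
        def conv_block() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
                nn.InstanceNorm2d(feat_ch),
                nn.LeakyReLU(0.2, inplace=True),
            )
        # Shared (weight-tied across modalities) branch: commonality-sensitive.
        self.shared = conv_block()
        # Independent branch per modality: discrepancy-sensitive.
        self.specific = nn.ModuleList(conv_block() for _ in range(n_modalities))

    def forward(self, x: torch.Tensor, modality_idx: int):
        # Returns (modality-invariant, modality-specific) feature maps.
        return self.shared(x), self.specific[modality_idx](x)
```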
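Likewise, the "hard integration and soft integration" of the Dynamic Feature Unification Module could plausibly be realized as an element-wise maximum across the available modality features (hard, selecting the strongest response) combined with an attention-weighted sum (soft, blending all responses without discarding information). The sketch below assumes exactly that reading; it is not the paper's actual design, and the fusion operators are stand-ins.

```python
import torch
import torch.nn as nn

class DynamicFeatureUnification(nn.Module):
    """Hypothetical sketch: fuse features from a variable set of modalities.

    'Hard' integration takes an element-wise max over the set (effective,
    but discards non-maximal responses); 'soft' integration computes per-pixel
    attention weights for each modality and takes a weighted sum (retains
    information from all inputs). The two fused maps are concatenated and
    projected back to feat_ch channels.
    """

    def __init__(self, feat_ch: int = 64):
        super().__init__()
        self.attn = nn.Conv2d(feat_ch, 1, kernel_size=1)   # per-pixel modality score
        self.proj = nn.Conv2d(2 * feat_ch, feat_ch, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: one (B, C, H, W) map per *available* modality; the list
        # length varies with the missing-modality pattern at test time.
        stack = torch.stack(feats, dim=1)                   # (B, M, C, H, W)
        hard = stack.max(dim=1).values                      # element-wise max
        scores = torch.stack([self.attn(f) for f in feats], dim=1)  # (B, M, 1, H, W)
        soft = (torch.softmax(scores, dim=1) * stack).sum(dim=1)    # weighted sum
        return self.proj(torch.cat([hard, soft], dim=1))    # (B, C, H, W)

# Usage: the same module fuses two or three available modalities unchanged.
fuse = DynamicFeatureUnification(feat_ch=64)
fused = fuse([torch.randn(2, 64, 32, 32) for _ in range(3)])
```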