Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information but is limited in practice by excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI, using incomplete k-space data of several source modalities as input. The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality. The proposed model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multi-modal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained on multi-modal MRI data. Moreover, a bilevel-optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach through extensive numerical experiments.
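To make the unrolling idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual architecture) of how a learnable optimization algorithm for a variational reconstruction model induces a multi-phase network. It assumes a single modality, a simple data-fidelity term 0.5·||M F x − y||², and a soft-thresholding placeholder standing in for the learned modality-specific regularization step; the function names and parameters (`step`, `tau`, `num_phases`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Placeholder for a learned modality-specific proximal operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_reconstruction(y, mask, num_phases=5, step=1.0, tau=0.01):
    """Reconstruct a real image from undersampled k-space data y.

    Each "phase" of the induced network performs one gradient step on the
    data-fidelity term 0.5 * ||M F x - y||^2, followed by a proximal step;
    in a trained network, step sizes and the proximal operator would be
    learnable per phase.
    """
    x = np.real(np.fft.ifft2(mask * y))  # zero-filled initialization
    for _ in range(num_phases):
        # gradient of the fidelity term: F^H M^H (M F x - M y)
        residual = mask * np.fft.fft2(x) - mask * y
        grad = np.real(np.fft.ifft2(mask * residual))
        x = soft_threshold(x - step * grad, tau)  # regularization step
    return x
```

In the paper's setting, one such sequence of phases would run per source modality, with modality-specific feature extractors replacing the fixed proximal step, and a synthesis module would map the reconstructed source features to the target modality.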