Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
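The adversarial reverse-diffusion projection summarized above can be made concrete with a short sketch. The fragment below is a minimal, hedged illustration of one training step, assuming a standard variance-preserving (DDPM-style) forward process with a linear noise schedule; all names (CondNet, q_sample, the step size k, the network sizes, and the loss weighting) are hypothetical placeholders for exposition, not the authors' released implementation.

```python
# Illustrative sketch (assumptions only): a conditional generator denoises the
# target image across a large diffusion step of size k, and a conditional
# discriminator adversarially judges the resulting x_{t-k} projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, k = 1000, 250                         # total steps and large step size (assumed)
betas = torch.linspace(1e-4, 2e-2, T)    # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    """Closed-form forward diffusion: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

class CondNet(nn.Module):
    """Tiny conditional CNN standing in for the diffusive generator/discriminator."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, *xs):
        return self.net(torch.cat(xs, dim=1))

G = CondNet(in_ch=2, out_ch=1)  # (x_t, source y) -> predicted clean target x0
D = CondNet(in_ch=3, out_ch=1)  # (x_{t-k}, x_t, y) -> real/fake logits

def train_step(x0, y, opt_g, opt_d):
    b = x0.size(0)
    t = torch.randint(k, T, (b,))              # draw a step index with t >= k
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    x_prev_real = q_sample(x0, t - k, noise)   # "real" x_{t-k} built from data

    # Generator predicts x0 from (x_t, y), then is re-projected to x_{t-k}.
    x0_hat = G(x_t, y)
    x_prev_fake = q_sample(x0_hat, t - k, torch.randn_like(x0))

    # Discriminator update (non-saturating GAN loss).
    d_real = D(x_prev_real, x_t, y)
    d_fake = D(x_prev_fake.detach(), x_t, y)
    loss_d = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D while staying close to the true target.
    loss_g = F.softplus(-D(x_prev_fake, x_t, y)).mean() + F.l1_loss(x0_hat, x0)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()

# Toy usage with random tensors in place of paired target/source images.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
x0, y = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
train_step(x0, y, opt_g, opt_d)
```

The design point this sketch illustrates is why large steps call for an adversarial projector: over a step of size k much greater than 1, the true denoising distribution is no longer well approximated by a Gaussian, so a learned conditional discriminator stands in for the analytic small-step posterior and permits fast sampling.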