In medical image synthesis, model training can be challenging due to inconsistencies between images of different modalities, even for the same patient, typically caused by internal status/tissue changes, as different modalities are usually acquired at different times. This paper proposes a novel deep learning method, the Structure-aware Generative Adversarial Network (SA-GAN), which preserves the shapes and locations of inconsistent structures when generating medical images. SA-GAN is employed to generate synthetic computed tomography (synCT) images from magnetic resonance imaging (MRI) using two parallel streams: the global stream translates the input from the MRI to the CT domain, while the local stream automatically segments the inconsistent organs, maintains their locations and shapes as in the MRI, and translates the organ intensities to CT. Through extensive experiments on a pelvic dataset, we demonstrate that SA-GAN provides clinically acceptable accuracy on both synCTs and organ segmentation, and supports MR-only treatment planning in disease sites with internal organ status changes.