Recently, interest in MR-only treatment planning using synthetic CTs (synCTs) has grown rapidly in radiation therapy. However, developing class solutions for medical images that contain atypical anatomy remains a major limitation. In this paper, we propose a novel spatial attention-guided generative adversarial network (attention-GAN) model that generates accurate synCTs from T1-weighted MR images as input, with the aim of addressing atypical anatomy. Experimental results on fifteen brain cancer patients show that attention-GAN outperformed existing synCT models, achieving average MAEs of 85.22$\pm$12.08, 232.41$\pm$60.86, and 246.38$\pm$42.67 Hounsfield units (HU) between synCT and CT-SIM across the entire head, bone, and air regions, respectively. Qualitative analysis shows that attention-GAN uses spatially focused areas to better handle outliers and regions with complex anatomy or post-surgical changes, and thus offers strong potential for supporting near real-time MR-only treatment planning.
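The region-wise evaluation above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the HU thresholds used to derive the bone and air masks, and the use of the whole volume as the head mask, are assumptions for demonstration only.

```python
import numpy as np

def region_mae(synct, ct, bone_thresh=250.0, air_thresh=-400.0):
    """Mean absolute error (in HU) between a synthetic CT and the
    reference CT-SIM over head, bone, and air regions.

    The HU thresholds here are illustrative assumptions; in practice,
    masks would come from segmentation of the planning CT.
    """
    synct = np.asarray(synct, dtype=np.float64)
    ct = np.asarray(ct, dtype=np.float64)
    err = np.abs(synct - ct)

    bone = ct >= bone_thresh            # bright voxels on the reference CT
    air = ct <= air_thresh              # dark voxels (air cavities)
    head = np.ones_like(ct, dtype=bool) # whole volume stands in for the head mask

    return {
        "head": err[head].mean(),
        "bone": err[bone].mean() if bone.any() else float("nan"),
        "air": err[air].mean() if air.any() else float("nan"),
    }
```

For example, a synCT volume that is uniformly 10 HU off from the reference CT yields an MAE of 10 HU in every region.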