Multimodal magnetic resonance imaging (MRI) can reveal different patterns of human tissue and is crucial for clinical diagnosis. However, limited by cost, noise and manual labeling, obtaining diverse and reliable multimodal MR images remains challenging. For the same lesion, different MRI modalities differ greatly in background information, coarse localization and fine structure. To obtain better generation and segmentation performance, a coordinate-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed. The performance of the generator is optimized by introducing a Coordinate Attention (CA) module and a Spatial Attention (SA) module. The two modules make full use of the captured positional information to accurately locate regions of interest and strengthen the network structure of the generator. Extracting the structural and detailed information of the original medical image helps generate the desired image with higher quality. The original CycleGAN also suffers from long training time, an excessive number of parameters and difficulty in converging. To address this, the Coordinate Attention (CA) module replaces the Res Blocks to reduce the number of parameters and cooperates with the spatial information extraction network described above to strengthen the information extraction ability. On the basis of CASP-GAN, an attentional generative cross-modality segmentation (AGCMS) method is further proposed, which feeds the modalities generated by CASP-GAN together with the real modalities into a segmentation network for brain tumor segmentation. Experimental results show that CASP-GAN outperforms CycleGAN and some state-of-the-art methods in PSNR, SSIM and RMSE on most tasks.
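As a rough illustration of the two attention blocks named above, the following PyTorch sketch implements a Coordinate Attention module (after Hou et al., 2021) and a CBAM-style Spatial Attention module. The reduction ratio, activation, and kernel size are assumptions and not the exact CASP-GAN configuration, and the class names are hypothetical.

```python
# Minimal sketch of the CA and SA attention blocks; hyperparameters are assumed,
# not taken from the CASP-GAN paper.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorizes global pooling into two 1-D pools along
    height and width, so the attention maps keep positional information."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)  # the original paper uses h-swish
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # 1-D average pooling along each spatial direction
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = torch.cat([x_h, x_w], dim=2)                        # (n, c, h+w, 1)
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                          # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))      # (n, c, 1, w)
        return x * a_h * a_w  # direction-aware attention applied to the feature map


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise avg/max maps are fused by a
    single convolution into one attention weight per spatial location."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


if __name__ == "__main__":
    feat = torch.randn(1, 64, 128, 128)          # dummy generator feature map
    print(CoordinateAttention(64)(feat).shape)   # torch.Size([1, 64, 128, 128])
    print(SpatialAttention()(feat).shape)        # torch.Size([1, 64, 128, 128])
```

Both blocks return a tensor of the same shape as their input, so they can be dropped into a CycleGAN-style generator in place of (or alongside) residual blocks without changing the surrounding architecture.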