Transformers have made remarkable progress towards modeling long-range dependencies in medical image analysis. However, current transformer-based models suffer from several drawbacks: 1) existing methods fail to capture important image features due to naive tokenization schemes; 2) the models suffer from information loss because they consider only single-scale feature representations; and 3) the segmentation label maps they generate are not accurate enough, since rich semantic context and anatomical textures are not taken into account. In this work, we present CA-GANformer, a novel generative adversarial transformer for medical image segmentation. First, we exploit a pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a class-aware transformer module to better learn the discriminative regions of objects with semantic structure. Finally, we employ an adversarial training strategy that boosts segmentation accuracy, in which a transformer-based discriminator captures high-level, semantically correlated content and low-level anatomical features. Our experiments demonstrate that CA-GANformer substantially outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining absolute improvements of 2.54%-5.88% in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model's inner workings, shed light on the challenges of improving transparency, and show that transfer learning can greatly improve performance and reduce the amount of medical imaging data needed for training, making CA-GANformer a strong starting point for downstream medical image analysis tasks. Code and models will be made publicly available.
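To make the adversarial segmentation setup described above more concrete, the following is a minimal, self-contained PyTorch sketch, not the paper's implementation. The generator below stands in for the pyramid-structured, class-aware transformer encoder with a small convolutional pyramid, the discriminator is a small CNN rather than a transformer, and all module names, shapes, losses, and hyperparameters (e.g. `PyramidSegGenerator`, `adv_weight`, nine classes) are illustrative assumptions.

```python
# Illustrative sketch only: a supervised segmentation loss plus an adversarial
# term, as described in the abstract. Architectures are simplified stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidSegGenerator(nn.Module):
    """Toy generator: a convolutional pyramid yields multi-scale features whose
    per-scale predictions are fused into one per-pixel class map (a stand-in for
    the multi-scale, class-aware transformer encoder)."""

    def __init__(self, in_ch=1, num_classes=9, widths=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(ch, w, 3, stride=2, padding=1), nn.GELU()))
            ch = w
        self.heads = nn.ModuleList([nn.Conv2d(w, num_classes, 1) for w in widths])

    def forward(self, x):
        size = x.shape[-2:]
        logits = 0
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            # Upsample each scale's prediction and sum them (multi-scale fusion).
            logits = logits + F.interpolate(
                head(x), size=size, mode="bilinear", align_corners=False)
        return logits


class SegDiscriminator(nn.Module):
    """Toy discriminator scoring (image, mask) pairs; the paper's discriminator
    is transformer-based, a small CNN keeps this sketch short."""

    def __init__(self, in_ch=1, num_classes=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + num_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, image, mask_probs):
        return self.net(torch.cat([image, mask_probs], dim=1))


def train_step(G, D, opt_g, opt_d, image, gt_mask, adv_weight=0.01):
    """One adversarial step: cross-entropy segmentation loss plus an adversarial
    term that asks D to distinguish predicted masks from ground truth."""
    onehot = F.one_hot(gt_mask, num_classes=9).permute(0, 3, 1, 2).float()

    # Discriminator update: real (image, gt) pairs vs. fake (image, prediction).
    with torch.no_grad():
        fake = torch.softmax(G(image), dim=1)
    real_target = torch.ones(image.size(0), 1)
    fake_target = torch.zeros(image.size(0), 1)
    d_loss = (F.binary_cross_entropy_with_logits(D(image, onehot), real_target) +
              F.binary_cross_entropy_with_logits(D(image, fake), fake_target))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: segmentation loss plus a weighted adversarial loss.
    logits = G(image)
    seg_loss = F.cross_entropy(logits, gt_mask)
    adv_loss = F.binary_cross_entropy_with_logits(
        D(image, torch.softmax(logits, dim=1)), real_target)
    g_loss = seg_loss + adv_weight * adv_loss
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return g_loss.item(), d_loss.item()
```

In this kind of setup, the adversarial term nudges the generator toward label maps whose global statistics the discriminator cannot tell apart from ground truth, which is the role the abstract attributes to the transformer-based discriminator; the weighting between the two loss terms shown here is an assumed hyperparameter.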