We propose a novel transformer model capable of segmenting medical images of varying modalities. The fine-grained nature of medical image analysis poses challenges that have kept the adaptation of transformers to this domain at a nascent stage. The overwhelming success of the UNet lies in its ability to appreciate the fine-grained nature of the segmentation task, an ability which existing transformer-based models do not currently possess. To address this shortcoming, we propose the Fully Convolutional Transformer (FCT), which builds on the proven ability of Convolutional Neural Networks to learn effective image representations and combines it with the ability of Transformers to capture long-term dependencies in their inputs. The FCT is the first fully convolutional transformer model in the medical imaging literature. It processes its input in two stages: first, it learns to extract long-range semantic dependencies from the input image, and then it learns to capture hierarchical global attributes from the features. FCT is compact, accurate, and robust. Our results show that it outperforms all existing transformer architectures by large margins across multiple medical image segmentation datasets of varying modalities, without the need for any pre-training. FCT outperforms its closest competitor on the Dice metric by 1.3% on the ACDC dataset, 4.4% on the Synapse dataset, 1.2% on the Spleen dataset, and 1.1% on the ISIC 2017 dataset, with up to five times fewer parameters. Our code, environments, and models will be available via GitHub.