Fully Convolutional Neural Networks (FCNNs) with contracting and expanding paths have been prominent in the majority of medical image segmentation applications over the past decade. In FCNNs, the encoder plays an integral role by learning both global and local features and contextual representations, which the decoder uses for semantic output prediction. Despite their success, the locality of convolutional layers in FCNNs limits the ability to learn long-range spatial dependencies. Inspired by the recent success of transformers in Natural Language Processing (NLP) for long-range sequence learning, we reformulate the task of volumetric (3D) medical image segmentation as a sequence-to-sequence prediction problem. We introduce a novel architecture, dubbed UNEt TRansformers (UNETR), that utilizes a transformer as the encoder to learn sequence representations of the input volume and effectively capture global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output. We validate the performance of our method on the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for multi-organ segmentation and on the Medical Segmentation Decathlon (MSD) dataset for brain tumor and spleen segmentation tasks. Our benchmarks demonstrate new state-of-the-art performance on the BTCV leaderboard. Code: https://monai.io/research/unetr
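The key reformulation above is treating a 3D volume as a sequence of tokens. The full UNETR implementation (linked above) uses a learned linear projection of 3D patches into an embedding space; the snippet below is only a minimal NumPy sketch of the sequence reformulation itself, splitting a volume into non-overlapping 16³ patches and flattening each patch into one token. The function name and patch size are illustrative, not taken from the released code.

```python
import numpy as np

def volume_to_patch_sequence(volume, patch=16):
    """Flatten a 3D volume into a sequence of non-overlapping patch tokens.

    Illustrative sketch of the sequence-to-sequence reformulation:
    volume is a (D, H, W) array whose sides are divisible by `patch`;
    the result is an (N, patch**3) token matrix with N = D*H*W / patch**3.
    """
    D, H, W = volume.shape
    assert D % patch == 0 and H % patch == 0 and W % patch == 0
    # Reshape into a grid of patches, then bring the three within-patch
    # axes together so each patch can be flattened into one token.
    v = volume.reshape(D // patch, patch, H // patch, patch, W // patch, patch)
    v = v.transpose(0, 2, 4, 1, 3, 5)      # (d, h, w, patch, patch, patch)
    return v.reshape(-1, patch ** 3)       # (N, patch**3)

vol = np.zeros((96, 96, 96), dtype=np.float32)
seq = volume_to_patch_sequence(vol)
print(seq.shape)  # (216, 4096): 6*6*6 patches, each 16**3 voxels
```

In the actual architecture these tokens are linearly projected, given positional embeddings, and fed to the transformer encoder, whose intermediate representations are reshaped back to 3D feature maps for the skip connections.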