Over the past decade, convolutional neural networks (ConvNets) have dominated the field of medical image analysis. However, the performance of ConvNets may still be limited by their inability to model long-range spatial relations between voxels in an image. Numerous vision Transformers have recently been proposed to address this shortcoming of ConvNets, demonstrating state-of-the-art performance in many medical imaging applications. Transformers may be a strong candidate for image registration because their self-attention mechanism enables a more precise comprehension of the spatial correspondence between moving and fixed images. In this paper, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. We also introduce three variants of TransMorph: two diffeomorphic variants that ensure topology-preserving deformations, and a Bayesian variant that produces well-calibrated registration uncertainty estimates. The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures using volumetric medical images from two applications: inter-patient brain MRI registration and phantom-to-CT registration. Qualitative and quantitative results demonstrate that TransMorph and its variants lead to substantial performance improvements over the baseline methods, confirming the effectiveness of Transformers for medical image registration.