In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performance of ConvNets may be limited by a lack of explicit consideration of the long-range spatial relationships in an image. Recently, Vision Transformer architectures have been proposed to address the shortcomings of ConvNets and have produced state-of-the-art performance in many medical imaging applications. Transformers may be a strong candidate for image registration because their unlimited receptive field enables a more precise comprehension of the spatial correspondence between moving and fixed images. Here, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. This paper also presents diffeomorphic and Bayesian variants of TransMorph: the diffeomorphic variants ensure topology-preserving deformations, and the Bayesian variant produces a well-calibrated estimate of registration uncertainty. We extensively validated the proposed models using 3D medical images from three applications: inter-patient brain MRI registration, atlas-to-patient brain MRI registration, and phantom-to-CT registration. The proposed models were evaluated against a variety of existing registration methods and Transformer architectures. Qualitative and quantitative results demonstrate that the proposed Transformer-based model leads to a substantial performance improvement over the baseline methods, confirming the effectiveness of Transformers for medical image registration.
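To make the registration setting concrete: learning-based methods such as the one described above typically predict a dense displacement field that maps each voxel of the fixed image to a sampling location in the moving image, and then warp the moving image with that field. The snippet below is a minimal 2D sketch of this warping step using nearest-neighbor sampling in NumPy; it is an illustration of the general idea, not TransMorph's actual implementation (which operates on 3D volumes with trilinear interpolation), and the function and argument names are hypothetical.

```python
import numpy as np

def warp_image(moving, flow):
    """Warp a 2D moving image with a dense displacement field.

    moving : (H, W) array, the image to be deformed.
    flow   : (2, H, W) array of per-pixel displacements (dy, dx),
             added to the identity grid to get sampling locations.

    Uses nearest-neighbor sampling for simplicity; real registration
    networks use (tri)linear interpolation so the warp is differentiable.
    """
    H, W = moving.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling locations = identity grid + predicted displacement,
    # clamped to the image bounds (border padding).
    sy = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return moving[sy, sx]
```

During training, a similarity loss between the warped moving image and the fixed image (plus a smoothness penalty on the field) drives the network toward plausible deformations; the diffeomorphic variants additionally integrate a velocity field so the resulting warp is invertible.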