In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration. Our method builds on SAM, a recent algorithm that computes dense anatomical/semantic correspondences between two images at the pixel level. We name our method SAME; it decomposes image registration into three steps: affine transformation, coarse deformation, and deep deformable registration. Using SAM embeddings, we enhance these steps by finding more coherent correspondences and by providing features and a loss function with stronger semantic guidance. We collect a multi-phase chest computed tomography dataset with 35 annotated organs per patient and perform inter-subject registration for quantitative evaluation. Results show that SAME outperforms widely used traditional registration techniques (Elastix FFD, ANTs SyN) and the learning-based VoxelMorph method by at least 4.7% and 2.7% in Dice score on the two separate tasks of within-contrast-phase and across-contrast-phase registration, respectively. SAME achieves performance comparable to the best traditional registration method in our evaluation, DEEDS, while being orders of magnitude faster (1.2 seconds versus 45 seconds).
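The affine step of a correspondence-driven pipeline like the one above can be illustrated with a simple least-squares fit over matched keypoint pairs (e.g., locations matched via embedding similarity). The sketch below is only an illustration under that assumption, not the authors' implementation; all function and variable names are hypothetical.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine (A, t) mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 3) arrays of corresponding 3D keypoints,
    e.g. point pairs matched by embedding similarity (illustrative only).
    Solves dst ~= src @ A.T + t in the least-squares sense.
    """
    n = src_pts.shape[0]
    X = np.hstack([src_pts, np.ones((n, 1))])        # homogeneous coords, (N, 4)
    M, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)  # stacked [A.T; t], (4, 3)
    A, t = M[:3].T, M[3]
    return A, t

# Usage: recover a known affine from noiseless correspondences.
rng = np.random.default_rng(0)
A_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
t_true = np.array([2.0, -1.0, 0.5])
src = rng.standard_normal((20, 3))
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

With exact correspondences the recovered transform matches the ground truth; in practice, robust estimation (e.g., RANSAC) would be used to reject outlier matches.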