In medical imaging, most image registration methods implicitly assume a one-to-one correspondence between the source and target images (i.e., a diffeomorphic mapping). However, this is not necessarily the case when dealing with pathological medical images (e.g., in the presence of a tumor, lesion, etc.). To cope with this issue, the Metamorphosis model has been proposed. It modifies both the shape and the appearance of an image to account for geometric and topological differences. However, its high computational time and load have hampered its adoption so far. Here, we propose a deep residual learning implementation of Metamorphosis that drastically reduces the computational time at inference. Furthermore, we show that the proposed framework can easily integrate prior knowledge of the localization of topological changes (e.g., segmentation masks), which can act as a spatial regularization to correctly disentangle appearance and shape changes. We test our method on the BraTS 2021 dataset, showing that it outperforms current state-of-the-art methods in the alignment of images with brain tumors.
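As a rough illustration of the metamorphic evolution summarized above, the following is a minimal PyTorch sketch (not the authors' implementation) of a single Euler integration step: the image is advected by a velocity field (shape change) and receives an additive intensity residual (appearance change), optionally restricted to a segmentation mask. The function names, the step size `dt`, and the appearance weight `mu` are illustrative assumptions.

```python
# Minimal sketch of one metamorphic integration step (assumed parameterization):
# shape change = advection by a velocity field, appearance change = additive
# intensity residual, optionally masked to the region of topological change.
import torch
import torch.nn.functional as F

def identity_grid(shape, device=None):
    """Identity sampling grid in [-1, 1] for grid_sample; shape = (H, W)."""
    H, W = shape
    ys = torch.linspace(-1.0, 1.0, H, device=device)
    xs = torch.linspace(-1.0, 1.0, W, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).unsqueeze(0)  # (1, H, W, 2), (x, y) order

def metamorphosis_step(image, velocity, residual, mask=None, dt=0.1, mu=0.02):
    """
    One Euler step of a metamorphic evolution (hypothetical helper):
      image:    (B, 1, H, W) current image I_t
      velocity: (B, 2, H, W) velocity field v_t in normalized [-1, 1] coordinates
      residual: (B, 1, H, W) additive appearance change z_t
      mask:     optional (B, 1, H, W) region where appearance changes are allowed
    """
    B, _, H, W = image.shape
    grid = identity_grid((H, W), device=image.device).expand(B, -1, -1, -1)
    # Semi-Lagrangian advection: I_{t+dt}(x) = I_t(x - dt * v_t(x)).
    warp = grid - dt * velocity.permute(0, 2, 3, 1)  # (B, H, W, 2)
    advected = F.grid_sample(image, warp, mode="bilinear",
                             padding_mode="border", align_corners=True)
    # Add the (optionally masked) appearance residual.
    if mask is not None:
        residual = residual * mask
    return advected + dt * mu * residual

# Toy usage: zero velocity and zero residual leave the image unchanged.
I = torch.rand(1, 1, 64, 64)
v = torch.zeros(1, 2, 64, 64)
z = torch.zeros(1, 1, 64, 64)
assert torch.allclose(I, metamorphosis_step(I, v, z), atol=1e-5)
```

In the paper's setting, a network would predict `velocity` and `residual` at each step, and a tumor segmentation mask passed as `mask` would confine appearance changes to the pathological region, leaving the rest of the alignment to the geometric deformation.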