Multi-modality medical images can provide relevant or complementary information about a target (organ, tumor, or tissue). Registering multi-modality images to a common space can fuse this comprehensive information and facilitate clinical applications. Recently, neural networks have been widely investigated to boost registration methods. However, it remains challenging to develop a multi-modality registration network due to the lack of robust criteria for network training. In this work, we propose a multi-modality registration network (MMRegNet), which can perform registration between multi-modality images. Meanwhile, we present spatially encoded gradient information to train MMRegNet in an unsupervised manner. The proposed network was evaluated on the MM-WHS 2017 dataset. Results show that MMRegNet achieves promising performance on left-ventricle cardiac registration tasks. Furthermore, to demonstrate the versatility of MMRegNet, we evaluate the method on a liver dataset from CHAOS 2019. The source code will be released publicly\footnote{https://github.com/NanYoMy/mmregnet} once the manuscript is accepted.