Multi-modality medical images can provide relevant and complementary anatomical information about a target (organ, tumor, or tissue). Registering multi-modality images to a common space can fuse this comprehensive information and bring convenience for clinical applications. Recently, neural networks have been widely investigated to boost registration methods. However, it is still challenging to develop a multi-modality registration network due to the lack of robust criteria for network training. Besides, most existing registration networks mainly focus on pairwise registration and are hardly applicable to scenarios with multiple images. In this work, we propose a multi-modality registration network (MMRegNet), which can jointly register multiple images with different modalities to a target image. Meanwhile, we present spatially encoded gradient information to train MMRegNet in an unsupervised manner. The proposed network was evaluated on two datasets, i.e., MM-WHS 2017 and CHAOS 2019. The results show that the proposed network can achieve promising performance on cardiac left ventricle and liver registration tasks. Source code is released publicly on GitHub.
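The abstract motivates unsupervised training with gradient-based similarity: image gradients capture edge structure that is comparable across modalities even when raw intensities are not. The sketch below illustrates that idea with a normalized-gradient-field (NGF) style similarity in NumPy; it is a generic illustration of gradient-based multi-modal similarity, not the paper's actual spatially encoded gradient loss, and all function names here are hypothetical.

```python
import numpy as np

def normalized_gradients(img, eps=1e-5):
    # Unit-length spatial gradients of a 2-D image (finite differences).
    # The eps term keeps flat regions from producing division by zero.
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / mag, gy / mag

def ngf_similarity(fixed, moving):
    # Mean squared inner product of normalized gradient fields.
    # High when edges align, regardless of intensity scale -- which is
    # why gradient information is attractive for multi-modality pairs.
    fx, fy = normalized_gradients(fixed)
    mx, my = normalized_gradients(moving)
    return float(np.mean((fx * mx + fy * my) ** 2))

# A horizontal intensity ramp and an affinely rescaled copy (mimicking a
# second modality) have aligned edges, so similarity is near 1; the
# transposed ramp has orthogonal edges, so similarity is near 0.
ramp = np.tile(np.arange(16.0), (16, 1))
s_aligned = ngf_similarity(ramp, 3.0 * ramp + 10.0)   # ~1.0
s_orthogonal = ngf_similarity(ramp, ramp.T)           # ~0.0
```

In an unsupervised registration network, a similarity of this kind (negated) would serve as the training loss between the warped moving image and the fixed image, so no ground-truth deformations are needed.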