This paper explores the use of self-supervised deep learning in medical imaging in cases where two scan modalities are available for the same subject. Specifically, we use a large publicly-available dataset of over 20,000 subjects from the UK Biobank with both whole-body Dixon technique magnetic resonance (MR) scans and dual-energy x-ray absorptiometry (DXA) scans. We make three contributions: (i) We introduce a multi-modal image-matching contrastive framework that is able to learn to match different-modality scans of the same subject with high accuracy. (ii) Without any adaptation, we show that the correspondences learnt during this contrastive training step can be used to perform automatic cross-modal scan registration in a completely unsupervised manner. (iii) Finally, we use these registrations to transfer segmentation maps from the DXA scans to the MR scans, where they are used to train a network to segment anatomical regions without requiring ground-truth MR examples. To aid further research, our code will be made publicly available.
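The cross-modal matching objective in contribution (i) can be illustrated with a symmetric InfoNCE-style contrastive loss, where each subject's scan in one modality is pulled toward that same subject's scan in the other modality and pushed away from all other subjects in the batch. This is a minimal NumPy sketch under assumed conventions (the function name, embedding shapes, and the temperature value are illustrative, not the paper's implementation):

```python
import numpy as np

def cross_modal_contrastive_loss(mr_emb, dxa_emb, temperature=0.1):
    """Symmetric contrastive loss over a batch of N subjects.

    mr_emb, dxa_emb: (N, D) embeddings of the MR and DXA scans of the
    same N subjects, in corresponding row order. Row i of each matrix
    is the positive pair; all other rows act as in-batch negatives.
    (Illustrative sketch only; values and names are assumptions.)
    """
    # L2-normalise so dot products become cosine similarities
    mr = mr_emb / np.linalg.norm(mr_emb, axis=1, keepdims=True)
    dxa = dxa_emb / np.linalg.norm(dxa_emb, axis=1, keepdims=True)
    logits = mr @ dxa.T / temperature  # (N, N) similarity matrix
    n = logits.shape[0]

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    idx = np.arange(n)
    # MR -> DXA direction: softmax over rows, target is the diagonal
    loss_mr2dxa = -log_softmax(logits, axis=1)[idx, idx].mean()
    # DXA -> MR direction: softmax over columns
    loss_dxa2mr = -log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_mr2dxa + loss_dxa2mr)
```

At test time, the same similarity matrix supports the matching task directly: the predicted DXA match for MR scan i is simply the column with the highest similarity in row i.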