Clinical routine and retrospective cohorts commonly include multi-parametric Magnetic Resonance Imaging; however, these scans are mostly acquired in different anisotropic 2D views due to signal-to-noise ratio and scan-time constraints. The views thus acquired suffer from poor out-of-plane resolution, which hampers downstream volumetric image analysis that typically requires isotropic 3D scans. Combining different views of multi-contrast scans into high-resolution isotropic 3D scans is challenging due to the lack of a large training cohort, which calls for a subject-specific framework. This work proposes a novel solution to this problem leveraging Implicit Neural Representations (INR). Our proposed INR jointly learns two different contrasts of complementary views in a continuous spatial function and benefits from exchanging anatomical information between them. Trained within minutes on a single commodity GPU, our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets. Using Mutual Information (MI) as a metric, we find that our model converges to an optimum MI amongst sequences, achieving anatomically faithful reconstruction. Code is available at: https://github.com/jqmcginnis/multi_contrast_inr.
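The abstract uses Mutual Information between contrasts as its evaluation metric. As a minimal sketch of how MI between two co-registered image volumes can be estimated from a joint intensity histogram (the function name, bin count, and histogram approach are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Estimate MI between two intensity arrays via a joint histogram.

    Illustrative sketch: bins and histogram-based estimation are
    assumptions, not necessarily the paper's exact procedure.
    """
    # Joint intensity histogram of the two (flattened) images
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    # MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An anatomically faithful multi-contrast reconstruction should preserve the MI observed between the input sequences; an identical pair yields high MI, while statistically independent images yield MI near zero.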