Magnetic resonance (MR) imaging is a widely used scanning technique for disease detection, diagnosis, and treatment monitoring. Although it produces detailed images of organs and tissues with superior soft-tissue contrast, it suffers from long acquisition times, which make the image quality vulnerable to motion artifacts. Recently, many approaches have been developed to reconstruct fully sampled images from partially observed measurements in order to accelerate MR imaging. However, most of these efforts focus on reconstruction from a single modality or on a simple fusion of multiple modalities, neglecting the correlation knowledge shared across modalities at different feature levels. In this work, we propose a novel Multi-modal Aggregation Network, named MANet, which discovers complementary representations from a fully sampled auxiliary modality and uses them to hierarchically guide the reconstruction of a given target modality. In our MANet, the representations of the fully sampled auxiliary modality and the undersampled target modality are learned independently through modality-specific networks. A guided attention module is then introduced at each convolutional stage to selectively aggregate multi-modal features for better reconstruction, yielding comprehensive, multi-scale, multi-modal feature fusion. Moreover, MANet follows a hybrid-domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image details in the image domain. Extensive experiments demonstrate the superiority of the proposed approach over state-of-the-art MR image reconstruction methods.
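To make the guided attention idea concrete, the following is a minimal PyTorch sketch, not the authors' released code, of the kind of per-stage fusion the abstract describes: features from the fully sampled auxiliary modality are gated by a learned attention map and aggregated into the undersampled target-modality stream. The class name `GuidedAttentionFusion`, the layer layout, and all tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch of a guided attention fusion block at one convolutional
# stage of a multi-modal reconstruction network (not the official MANet code).
import torch
import torch.nn as nn


class GuidedAttentionFusion(nn.Module):
    """Fuse target-modality features with auxiliary-modality guidance."""

    def __init__(self, channels: int):
        super().__init__()
        # Attention map predicted from the concatenated multi-modal features.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # 1x1 conv merges the gated auxiliary features back into the target stream.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, target_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        # Predict an attention map from both modalities.
        a = self.attn(torch.cat([target_feat, aux_feat], dim=1))
        # Select complementary auxiliary features and aggregate them with the target features.
        guided = a * aux_feat
        return self.merge(torch.cat([target_feat, guided], dim=1))


if __name__ == "__main__":
    fuse = GuidedAttentionFusion(channels=64)
    t = torch.randn(1, 64, 128, 128)   # undersampled target-modality features
    a = torch.randn(1, 64, 128, 128)   # fully sampled auxiliary-modality features
    print(fuse(t, a).shape)            # torch.Size([1, 64, 128, 128])
```

In a full hybrid-domain pipeline, such a block would be applied at every encoder/decoder stage of both the $k$-space and image-domain branches, so that auxiliary guidance is injected at multiple scales rather than only at the input or output.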