Magnetic Resonance Imaging (MRI) is an important medical imaging modality, but it requires a long acquisition time. Various methods have been proposed to reduce the acquisition time; however, they fail to reconstruct images with clear structure for two main reasons. First, similar patches occur widely in MR images, yet most previous deep learning-based methods ignore this property and rely only on CNNs to learn local information. Second, existing methods use only the clear image to constrain the upper bound of the solution space, while the lower bound is left unconstrained, so better network parameters cannot be obtained. To address these problems, we propose a Contrastive Learning-based Local and Global Learning MRI Reconstruction Network (CLGNet). Specifically, according to Fourier theory, each value in the Fourier domain is computed from all values in the spatial domain. We therefore propose a Spatial and Fourier Layer (SFL) to simultaneously learn local and global information in the spatial and Fourier domains. Moreover, compared with self-attention and Transformers, the SFL has a stronger learning ability and achieves better performance in less time. Based on the SFL, we design a Spatial and Fourier Residual block as the main component of our model. Meanwhile, to constrain both the lower and upper bounds of the solution space, we introduce contrastive learning, which pulls the result closer to the clear image and pushes it further away from the undersampled image. Extensive experimental results on different datasets and acceleration rates demonstrate that the proposed CLGNet achieves new state-of-the-art results.
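The global nature of the Fourier-domain branch can be made concrete with a small NumPy sketch (not the paper's actual SFL implementation, just an illustration of the underlying convolution theorem): multiplying an image's spectrum by a per-frequency weight map, then inverting the FFT, is equivalent to a circular convolution whose kernel spans the entire image, so every output pixel depends on every input pixel.

```python
import numpy as np

# Hypothetical illustration of why a learned Fourier-domain multiply is
# "global": it equals a full-size circular convolution in the spatial domain.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kernel = rng.standard_normal((8, 8))  # stands in for a learned filter

# Fourier-domain branch: elementwise multiply of spectra, then inverse FFT.
out_fourier = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

# Equivalent spatial-domain view: circular convolution over the whole image.
out_spatial = np.zeros_like(img)
for u in range(8):
    for v in range(8):
        out_spatial += kernel[u, v] * np.roll(img, shift=(u, v), axis=(0, 1))

assert np.allclose(out_fourier, out_spatial)  # the two branches agree
```

This is why a single Fourier-domain layer attains a global receptive field at FFT cost, whereas stacked small CNN kernels only grow the receptive field gradually.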
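The pull/push idea behind the contrastive constraint can likewise be sketched with a hypothetical ratio loss (the paper's exact formulation may differ): the distance to the clear image (positive) is minimized while the distance to the undersampled input (negative) appears in the denominator, so the loss bounds the solution space from both sides.

```python
import numpy as np

def contrastive_recon_loss(pred, clear, undersampled, eps=1e-8):
    """Hypothetical contrastive reconstruction loss: pull pred toward the
    clear image (positive pair), push it away from the undersampled
    input (negative pair), here with mean-L1 distances."""
    pos = np.abs(pred - clear).mean()          # distance to upper bound
    neg = np.abs(pred - undersampled).mean()   # distance to lower bound
    return pos / (neg + eps)

rng = np.random.default_rng(1)
clear = rng.standard_normal((4, 4))
under = clear + 0.5 * rng.standard_normal((4, 4))   # degraded input
good = clear + 0.01 * rng.standard_normal((4, 4))   # good reconstruction

# A reconstruction near the clear image scores far lower than one that
# has not moved away from the undersampled input.
assert contrastive_recon_loss(good, clear, under) < \
       contrastive_recon_loss(under, clear, under)
```

Compared with a plain L1 loss on the clear target alone, the negative term penalizes solutions that stay close to the degraded input, which is the "lower bound" constraint the abstract refers to.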