Image reconstruction from undersampled k-space data plays an important role in accelerating MR data acquisition, and many deep learning-based methods have recently been explored. Despite the inspiring results achieved, the optimization of these methods commonly relies on fully-sampled reference data, which are time-consuming and difficult to collect. To address this issue, we propose a novel self-supervised learning method. Specifically, during model optimization, two subsets are constructed by randomly selecting part of the k-space data from the undersampled data and are then fed into two parallel reconstruction networks to perform information recovery. Two reconstruction losses are defined on all the scanned data points to enhance the networks' capability of recovering the frequency information. Meanwhile, to constrain the unscanned data points learned by the networks, a difference loss is designed to enforce consistency between the two parallel networks. In this way, the reconstruction model can be properly trained with only the undersampled data. During model evaluation, the undersampled data are treated as the inputs, and either of the two trained networks is expected to reconstruct high-quality results. The proposed method is flexible and can be employed in any existing deep learning-based reconstruction method. The effectiveness of the method is evaluated on an open brain MRI dataset. Experimental results demonstrate that the proposed self-supervised method can achieve competitive reconstruction performance compared to the corresponding supervised learning method at high acceleration rates (4 and 8). The code is publicly available at \url{https://github.com/chenhu96/Self-Supervised-MRI-Reconstruction}.
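To make the training scheme concrete, the following is a minimal PyTorch sketch of one self-supervised optimization step, assuming a single-coil setting where each network maps a masked k-space input to a full k-space prediction. All names (random_subset_masks, net1, net2, the keep ratio, and the use of unweighted L1 losses) are illustrative assumptions, not the authors' exact implementation; see the linked repository for the released code.

\begin{verbatim}
import torch
import torch.nn as nn

def random_subset_masks(scan_mask: torch.Tensor, keep_ratio: float = 0.6):
    """Randomly split the scanned k-space locations into two (possibly
    overlapping) subset masks used by the two parallel networks."""
    rand = torch.rand_like(scan_mask.float())
    mask1 = (scan_mask * (rand < keep_ratio)).float()
    mask2 = (scan_mask * (rand >= (1.0 - keep_ratio))).float()
    return mask1, mask2

def training_step(kspace_us, scan_mask, net1, net2, optimizer):
    """One step trained with only undersampled data.
    kspace_us: undersampled k-space as a real tensor [B, 2, H, W]
    scan_mask: binary mask of scanned locations      [B, 1, H, W]
    net1/net2: parallel reconstruction networks (nn.Module)
    """
    mask1, mask2 = random_subset_masks(scan_mask)

    # Each parallel network only sees its own randomly selected subset.
    out1 = net1(kspace_us * mask1)   # predicted full k-space
    out2 = net2(kspace_us * mask2)

    # Reconstruction losses: defined on ALL scanned data points.
    l1 = nn.functional.l1_loss(out1 * scan_mask, kspace_us * scan_mask)
    l2 = nn.functional.l1_loss(out2 * scan_mask, kspace_us * scan_mask)

    # Difference loss: the two networks must agree on the UNSCANNED points.
    unscanned = 1.0 - scan_mask
    l_diff = nn.functional.l1_loss(out1 * unscanned, out2 * unscanned)

    loss = l1 + l2 + l_diff
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}

At inference time, under the same assumptions, the full undersampled k-space (without subset splitting) is passed to either trained network to produce the reconstruction.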