Although recent deep learning methods, especially generative models, have shown strong performance in fast magnetic resonance imaging, there is still much room for improvement in high-dimensional generation. Considering that the internal dimensions of score-based generative models have a critical impact on estimating the gradient of the data distribution, we present a new idea, the low-rank tensor assisted k-space generative model (LR-KGM), for parallel imaging reconstruction. The key is to transform the original prior information into high-dimensional prior information for learning. More specifically, the multi-channel data are constructed into a large Hankel matrix, which is subsequently folded into a tensor for prior learning. In the testing phase, a low-rank rotation strategy is used to impose low-rank constraints on the tensor output of the generative network. Furthermore, we alternate between traditional generative iterations and low-rank high-dimensional tensor iterations during reconstruction. Experimental comparisons with state-of-the-art methods demonstrate that the proposed LR-KGM achieves better performance.
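The two structural operations named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the coil count, image size, window size, and the single-mode SVD truncation are assumptions (the described low-rank rotation strategy cycles such constraints over the tensor's modes during iteration).

```python
import numpy as np

def hankel_tensor(kspace, w=4):
    """Build a block-Hankel matrix from multi-channel k-space and fold it
    into a 3-D tensor. kspace: complex array of shape (coils, H, W)."""
    c, H, W = kspace.shape
    rows = []
    # Slide a w-by-w window across each spatial position; stacking the
    # vectorized multi-coil patches yields a block-Hankel structure.
    for i in range(H - w + 1):
        for j in range(W - w + 1):
            rows.append(kspace[:, i:i + w, j:j + w].reshape(-1))
    hankel = np.array(rows)                     # (patches, c * w * w)
    # Fold the 2-D matrix into a 3-D tensor for high-dimensional prior learning.
    tensor = hankel.reshape(hankel.shape[0], c, w * w)
    return hankel, tensor

def lowrank_truncate(tensor, rank=4):
    """One low-rank constraint step: unfold along mode 0, truncate the
    singular spectrum, and refold to the original tensor shape."""
    shape = tensor.shape
    mat = tensor.reshape(shape[0], -1)
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    s[rank:] = 0                                 # hard singular-value truncation
    return ((U * s) @ Vt).reshape(shape)

rng = np.random.default_rng(0)
kspace = rng.standard_normal((8, 32, 32)) + 1j * rng.standard_normal((8, 32, 32))
hankel, tensor = hankel_tensor(kspace)           # hankel: (841, 128)
constrained = lowrank_truncate(tensor, rank=4)   # same shape, low mode-0 rank
```

In the reconstruction loop described above, a step of this kind would alternate with the generative (score-based) update, so the network output is repeatedly projected back toward the low-rank tensor structure.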