Objective: To improve accelerated MRI reconstruction through a densely connected cascading deep learning reconstruction framework. Materials and Methods: A cascading deep learning reconstruction framework (baseline model) was modified by applying three architectural modifications: input-level dense connections between cascade inputs and outputs, an improved deep learning sub-network, and long-range skip connections between subsequent deep learning networks. An ablation study was performed in which five model configurations were trained end-to-end on the NYU fastMRI neuro dataset for four- and eight-fold acceleration. The trained models were evaluated by comparing their respective structural similarity index measure (SSIM), normalized mean square error (NMSE), and peak signal-to-noise ratio (PSNR). Results: The proposed densely interconnected residual cascading network (DIRCN), utilizing all three suggested modifications, achieved SSIM improvements of 8% and 11% for four- and eight-fold acceleration, respectively. For eight-fold acceleration, the model achieved a 23% decrease in NMSE compared with the baseline model. In the ablation study, each of the individual architectural modifications contributed to this improvement, improving the SSIM and NMSE by approximately 3% and 5%, respectively, for four-fold acceleration. Conclusion: The proposed architectural modifications allow for simple adjustments to an already existing cascading framework to further improve the resulting reconstructions.
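The two connection patterns named above can be illustrated with a minimal, hypothetical sketch. The code below is not the authors' implementation: it uses simple image-space convolutional blocks in place of the actual multi-coil k-space cascades with data consistency, and all layer sizes are illustrative. It only shows input-level dense connections (each cascade receives a concatenation of the original input and all previous cascade outputs) and long-range skip connections (each cascade refines the previous estimate residually).

```python
# Illustrative sketch only; simplified image-space stand-in for a cascading
# reconstruction network with dense and long-range skip connections.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Stand-in for the deep learning sub-network inside each cascade."""

    def __init__(self, in_ch: int, out_ch: int = 1, feat: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DenselyConnectedCascade(nn.Module):
    def __init__(self, num_cascades: int = 4):
        super().__init__()
        # Cascade i receives the original input plus the i previous outputs.
        self.cascades = nn.ModuleList(
            [ConvBlock(in_ch=i + 1) for i in range(num_cascades)]
        )

    def forward(self, x):
        outputs = [x]  # input-level dense connections start from the input
        for cascade in self.cascades:
            dense_in = torch.cat(outputs, dim=1)
            # Long-range skip connection: residual update of the latest estimate.
            outputs.append(outputs[-1] + cascade(dense_in))
        return outputs[-1]


# Example forward pass on a dummy single-channel image.
recon = DenselyConnectedCascade()(torch.randn(1, 1, 64, 64))
```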
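The reported evaluation metrics are standard. Below is a minimal sketch of how SSIM, NMSE, and PSNR could be computed on magnitude reconstructions, assuming NumPy arrays and scikit-image's structural_similarity; this is not the fastMRI evaluation code itself.

```python
# Illustrative metric definitions for reconstruction quality (magnitude images).
import numpy as np
from skimage.metrics import structural_similarity


def nmse(gt: np.ndarray, pred: np.ndarray) -> float:
    """Normalized mean square error: ||gt - pred||^2 / ||gt||^2."""
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)


def psnr(gt: np.ndarray, pred: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, with the peak taken as gt.max()."""
    mse = np.mean((gt - pred) ** 2)
    return float(20 * np.log10(gt.max()) - 10 * np.log10(mse))


def ssim(gt: np.ndarray, pred: np.ndarray) -> float:
    """Structural similarity index, with the data range set from the ground truth."""
    return float(
        structural_similarity(gt, pred, data_range=gt.max() - gt.min())
    )
```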