Accelerated MRI aims to find a pair of sampler and reconstructor that reduces acquisition time while maintaining reconstruction quality. Most existing works focus on either finding sparse samplers with a fixed reconstructor or finding reconstructors with a fixed sampler. Recently, researchers have begun to learn samplers and reconstructors jointly. In this paper, we propose an alternating training framework for finding a good sampler-reconstructor pair via deep reinforcement learning (RL). In particular, we propose a novel sparse-reward Partially Observed Markov Decision Process (POMDP) to formulate the MRI sampling trajectory. Compared with existing works that use dense-reward POMDPs, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage over dense-reward POMDPs. We evaluate our method on fastMRI, a public benchmark MRI dataset, and it achieves state-of-the-art reconstruction performance.
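To make the formulation concrete, below is a minimal sketch of a sparse-reward sampling episode on a toy 1-D signal: a softmax policy picks k-space lines under a fixed acquisition budget, the reconstruction error is computed once at the end of the episode (a dense-reward POMDP would instead reconstruct after every acquisition step), and a REINFORCE-style update trains the sampler. The zero-filled IFFT reconstructor and all names here are illustrative placeholders, not the paper's implementation; in the full alternating framework, the policy update would be interleaved with reconstructor updates.

```python
import numpy as np

rng = np.random.default_rng(0)
N, BUDGET = 32, 8  # number of k-space lines / acquisition budget

def reconstruct(mask, kspace):
    """Toy stand-in reconstructor: zero-filled inverse FFT (not the paper's)."""
    return np.real(np.fft.ifft(mask * kspace))

def run_episode(logits, signal):
    """Sparse-reward POMDP episode: reconstruct once, reward once at the end."""
    kspace = np.fft.fft(signal)
    mask = np.zeros(N)
    grads = []
    for _ in range(BUDGET):
        # Softmax policy restricted to lines not yet acquired.
        z = np.where(mask == 0, logits, -np.inf)
        p = np.exp(z - z.max())
        p /= p.sum()
        a = rng.choice(N, p=p)
        onehot = np.zeros(N)
        onehot[a] = 1.0
        grads.append(onehot - p)  # d log pi(a) / d logits for a softmax policy
        mask[a] = 1.0
    # A dense-reward POMDP would call reconstruct() inside the loop above,
    # i.e. BUDGET reconstructions per episode instead of one.
    recon = reconstruct(mask, kspace)
    reward = -np.mean((recon - signal) ** 2)  # single terminal (sparse) reward
    return grads, reward

logits, baseline, lr = np.zeros(N), 0.0, 0.5
for _ in range(500):
    signal = np.cumsum(rng.standard_normal(N))  # smooth random toy "image"
    grads, reward = run_episode(logits, signal)
    adv = reward - baseline  # moving-average baseline for variance reduction
    baseline = 0.9 * baseline + 0.1 * reward
    logits += lr * adv * np.sum(grads, axis=0)  # REINFORCE update
```

The computational-efficiency claim is visible in the loop structure: the sparse-reward episode performs one reconstruction per episode, whereas a dense-reward formulation pays one reconstruction per acquired line.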