Convolutional Neural Networks (CNNs) are highly effective for image reconstruction problems. Typically, CNNs are trained on large amounts of training images. Recently, however, un-trained CNNs such as the Deep Image Prior and Deep Decoder have achieved excellent performance on image reconstruction tasks such as denoising and inpainting, \emph{without using any training data}. Motivated by this development, we address the reconstruction problem arising in accelerated MRI with un-trained neural networks. We propose a highly optimized un-trained recovery approach based on a variation of the Deep Decoder and show that it significantly outperforms other un-trained methods, in particular classical sparsity-based compressed sensing methods and naive applications of un-trained neural networks. We also compare performance (both in terms of reconstruction accuracy and computational cost) in an ideal setup for trained methods, specifically on the fastMRI dataset, where the training and test data come from the same distribution. We find that our un-trained algorithm achieves performance similar to that of a baseline trained neural network, but that a state-of-the-art trained network outperforms the un-trained one. Finally, we perform a comparison in a non-ideal setup where the training and test distributions differ slightly, and find that our un-trained method achieves performance similar to that of a state-of-the-art trained accelerated-MRI reconstruction method.
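As a minimal toy illustration of the principle behind un-trained recovery (this is a linear sketch for intuition, not the paper's Deep Decoder): the unknown signal is re-parameterized by a small number of coefficients through a fixed random generator, and only those coefficients are fitted to the undersampled measurements. No training data is involved; the low-dimensional parameterization itself acts as the prior. All names below (`U`, `C`, `mask`, `lr`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 64, 8                        # signal length, number of coefficients
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)  # ground-truth signal

mask = rng.random(n) < 0.5          # observe roughly half the samples ("acceleration")
y = mask * x_true                   # undersampled measurements

U = rng.standard_normal((n, k)) / np.sqrt(n)  # fixed random generator (never trained)
C = np.zeros(k)                     # the only trainable parameters

def loss(C):
    # Data-fit term on the observed entries only, as in un-trained recovery.
    return 0.5 * np.sum((mask * (U @ C) - y) ** 2)

loss_init = loss(C)
lr = 0.5
for _ in range(2000):
    r = mask * (U @ C) - y          # residual on observed entries
    C -= lr * (U.T @ r)             # gradient step on the data-fit term

loss_final = loss(C)                # x_hat = U @ C is the reconstruction
```

The actual method replaces the linear map `U @ C` with a deep, non-linear, under-parameterized network whose weights are optimized against the k-space measurements; the fitting loop above mirrors that optimization in its simplest form.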