Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches usually require a large amount of high-quality paired training data, which is often unavailable in medical imaging. To circumvent this issue, we develop a novel unsupervised knowledge-transfer paradigm for learned reconstruction within a Bayesian framework. The proposed approach learns a reconstruction network in two phases. The first phase trains the reconstruction network on ordered pairs consisting of ground-truth ellipse images and the corresponding simulated measurement data. The second phase fine-tunes the pretrained network on more realistic measurement data without supervision. By construction, the framework delivers predictive uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography showing that the approach is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques. Moreover, for test data distributed differently from the training data, the proposed framework can significantly improve reconstruction quality, not only visually but also quantitatively in terms of PSNR and SSIM, compared with learned methods trained on the synthetic dataset only.
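The two-phase paradigm can be illustrated with a minimal sketch. All names below (TinyReconNet, forward_op, the toy operator A) are hypothetical stand-ins rather than the authors' implementation, and the Bayesian uncertainty machinery is omitted: phase 1 fits the network on simulated ellipse-phantom pairs with a supervised loss, while phase 2 fine-tunes the pretrained network using only a measurement-consistency loss, i.e. without ground-truth images.

```python
# Illustrative two-phase training sketch (hypothetical names; not the authors' code).
import torch
import torch.nn as nn

class TinyReconNet(nn.Module):
    """Stand-in reconstruction network: refines a crude pseudo-inverse reconstruction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def forward_op(img, A):
    """Linear forward model y = A x (A kept as a dense matrix for simplicity)."""
    return (A @ img.flatten(1).T).T

net = TinyReconNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
A = torch.randn(64, 32 * 32)           # toy CT-like forward operator
A_pinv = torch.linalg.pinv(A)          # crude pseudo-inverse used as "back-projection"

# Phase 1: supervised pretraining on synthetic phantoms and simulated measurements.
for _ in range(100):
    x_true = torch.rand(8, 1, 32, 32)                       # ellipse phantoms in practice
    y = forward_op(x_true, A) + 0.01 * torch.randn(8, 64)   # simulated noisy measurements
    x_init = (A_pinv @ y.T).T.reshape(8, 1, 32, 32)
    loss = nn.functional.mse_loss(net(x_init), x_true)      # supervised image-domain loss
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: unsupervised fine-tuning on realistic measurements (no ground truth available).
for _ in range(100):
    # Stand-in for measured data; in practice these are real acquisitions.
    y_real = forward_op(torch.rand(8, 1, 32, 32), A) + 0.01 * torch.randn(8, 64)
    x_init = (A_pinv @ y_real.T).T.reshape(8, 1, 32, 32)
    x_hat = net(x_init)
    loss = nn.functional.mse_loss(forward_op(x_hat, A), y_real)  # measurement consistency only
    opt.zero_grad(); loss.backward(); opt.step()
```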