Image-to-image translation is an ill-posed problem, as a unique one-to-one mapping may not exist between the source and target images. Learning-based methods proposed in this context often evaluate performance on test data that is similar to the training data, an assumption that may not hold in practice. This demands robust methods that can quantify uncertainty in the predictions to enable informed decision-making, especially in critical areas such as medical imaging. Recent works that employ conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings between the source and the target images. However, these methods do not focus on (i)~robustness of the models to out-of-distribution (OOD)-noisy data and (ii)~uncertainty quantification. This paper proposes a GAN-based framework that (i)~models an adaptive loss function, which automatically tunes a spatially varying norm for penalizing the residuals, for robustness to OOD-noisy data and (ii)~estimates the per-voxel uncertainty in the predictions. We demonstrate our method on two key applications in medical imaging: (i)~undersampled magnetic resonance imaging (MRI) reconstruction and (ii)~MRI modality propagation. Our experiments with two different real-world datasets show that the proposed method (i)~is robust to OOD-noisy test data and provides improved accuracy and (ii)~quantifies voxel-level uncertainty in the predictions.
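The abstract mentions an adaptive loss that automatically tunes a spatially varying norm on the residuals and yields per-voxel uncertainty. Below is a minimal sketch of one way such a loss could be realized, assuming a per-voxel generalized-Gaussian negative log-likelihood in PyTorch, where the generator predicts a per-voxel scale (`log_scale`) and shape (`shape`) alongside the image; the names and parameterization are illustrative assumptions, not the paper's own implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_residual_nll(y_pred, y_true, log_scale, shape):
    """Per-voxel generalized-Gaussian negative log-likelihood (sketch).

    The residual at each voxel is penalized with an exponent `beta`
    (spatially varying norm) and scale `alpha`, both predicted per voxel.
    """
    alpha = torch.exp(log_scale)            # per-voxel scale, constrained > 0
    beta = 1.0 + F.softplus(shape)          # per-voxel norm exponent, constrained > 1 (assumption)
    residual = torch.abs(y_pred - y_true)
    # NLL of a generalized Gaussian (additive constant log 2 dropped):
    # (|r|/alpha)^beta - log(beta) + log(alpha) + log Gamma(1/beta)
    nll = (residual / alpha) ** beta - torch.log(beta) \
          + torch.log(alpha) + torch.lgamma(1.0 / beta)
    return nll.mean()

# Toy usage with random tensors standing in for network outputs.
y_true = torch.rand(1, 1, 64, 64)
y_pred = torch.rand(1, 1, 64, 64)
log_scale = torch.zeros_like(y_true)
shape = torch.zeros_like(y_true)
loss = adaptive_residual_nll(y_pred, y_true, log_scale, shape)
```

Under this assumed parameterization, per-voxel uncertainty can be read off from the predicted scale and shape maps, e.g. via the generalized-Gaussian variance $\alpha^2\,\Gamma(3/\beta)/\Gamma(1/\beta)$.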