Trusting the predictions of deep learning models in safety-critical settings such as the medical domain is still not a viable option. Disentangled uncertainty quantification in medical imaging has received little attention. In this paper, we study disentangled uncertainties in image-to-image translation tasks in the medical domain. We compare multiple uncertainty quantification methods, namely Ensembles, Flipout, Dropout, and DropConnect, while using CycleGAN to translate T1-weighted brain MRI scans into T2-weighted brain MRI scans. We further evaluate uncertainty behavior in the presence of out-of-distribution data (brain CT and RGB face images), showing that epistemic uncertainty can be used to detect out-of-distribution inputs, which should increase the reliability of model outputs.
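To make the epistemic-uncertainty-based out-of-distribution detection concrete, the sketch below illustrates test-time Monte Carlo Dropout, one of the four methods compared. The toy generator, sample count, and `tau` threshold are illustrative assumptions, not the authors' actual architecture or configuration; the variance across stochastic forward passes is used as a per-pixel epistemic-uncertainty proxy.

```python
# Minimal sketch of MC Dropout for epistemic uncertainty (an assumption
# about implementation details; the paper's exact setup may differ).
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic while the rest of the model
    stays in eval mode (e.g. BatchNorm uses running statistics)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()


@torch.no_grad()
def epistemic_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run several stochastic forward passes; the per-pixel variance
    across samples serves as an epistemic-uncertainty estimate."""
    enable_mc_dropout(model)
    samples = torch.stack([model(x) for _ in range(n_samples)])  # (T, B, C, H, W)
    return samples.mean(0), samples.var(0)


# Toy stand-in for the CycleGAN T1->T2 generator (hypothetical).
generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),
    nn.Conv2d(16, 1, 3, padding=1),
)

t1_batch = torch.randn(4, 1, 64, 64)  # stand-in for T1-weighted slices
t2_pred, var_map = epistemic_uncertainty(generator, t1_batch)

# Flag inputs whose mean epistemic uncertainty exceeds a threshold.
# tau is a hypothetical value; in practice it would be calibrated on
# in-distribution validation data.
tau = 0.05
is_ood = var_map.mean(dim=(1, 2, 3)) > tau
print(is_ood)
```

Under this scheme, an out-of-distribution input such as a CT slice or an RGB face image would be expected to yield a higher mean variance than in-distribution T1 scans and thus be flagged.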