Deep Learning models are easily disturbed by variations in the input images that were not seen during training, resulting in unpredictable behaviours. Such Out-of-Distribution (OOD) images represent a significant challenge in the context of medical image analysis, where the range of possible abnormalities is extremely wide, including artifacts, unseen pathologies, or different imaging protocols. In this work, we evaluate various uncertainty frameworks for detecting OOD inputs in the context of Multiple Sclerosis lesion segmentation. By implementing a comprehensive evaluation scheme covering 14 sources of OOD of various natures and strengths, we show that methods relying on the predictive uncertainty of binary segmentation models often fail to detect outlying inputs. In contrast, learning to segment anatomical labels alongside lesions greatly improves the ability to detect OOD inputs.
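As a generic illustration of the kind of predictive-uncertainty-based OOD scoring discussed above (a minimal sketch, not the paper's exact method, with assumed function names and mean-entropy aggregation chosen for simplicity):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Voxel-wise predictive entropy of class probabilities.

    probs: array of shape (..., C) with class probabilities summing to 1
    along the last axis. Higher entropy means higher predictive uncertainty.
    """
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def image_ood_score(probs):
    """Aggregate voxel-wise entropy into one image-level OOD score.

    Mean aggregation is an assumption for this sketch; percentiles or
    maxima are common alternatives.
    """
    return float(np.mean(predictive_entropy(probs)))

# Toy binary-segmentation outputs (background/lesion probabilities):
confident = np.full((4, 4, 2), [0.99, 0.01])  # in-distribution-like, low entropy
uncertain = np.full((4, 4, 2), [0.5, 0.5])    # OOD-like, high entropy

assert image_ood_score(uncertain) > image_ood_score(confident)
```

An input whose score exceeds a threshold calibrated on in-distribution data would then be flagged as OOD; as the abstract notes, such binary-model scores can fail on many real OOD sources.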