Deep Neural Networks (DNNs) are widely used for decision making in a myriad of critical applications, ranging from medical to societal and even judicial. Given the importance of these decisions, it is crucial that we are able to interpret these models. We introduce a new method for interpreting image segmentation models by learning regions of images in which noise can be applied without hindering downstream model performance. We apply this method to segmentation of the pancreas in CT scans, and qualitatively compare the resulting explanations to those of existing explainability techniques, such as Grad-CAM and occlusion sensitivity. Additionally, we show that, unlike other methods, our interpretability model can be quantitatively evaluated based on downstream performance on obscured images.
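As a rough illustration of the core idea, the sketch below learns a per-pixel mask selecting where noise can be injected into an image without degrading a frozen segmentation model's output. This is a minimal PyTorch sketch of the general technique the abstract describes, not the authors' implementation: the function name `learn_noise_mask`, the cross-entropy preservation loss against the model's clean prediction, and the `lambda_noise` sparsity weight are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's released code): optimize a mask m
# in [0, 1] so that noise can cover as much of the image as possible while
# the frozen segmentation model still reproduces its clean prediction.
import torch
import torch.nn.functional as F

def learn_noise_mask(seg_model, image, steps=200, lr=0.05, lambda_noise=1.0):
    """image: (1, C, H, W) tensor; seg_model: frozen model returning logits."""
    seg_model.eval()
    with torch.no_grad():
        target = seg_model(image).argmax(dim=1)  # clean prediction, (1, H, W)

    # Unconstrained logits for the mask; sigmoid keeps the mask in [0, 1].
    mask_logits = torch.zeros(1, 1, *image.shape[2:],
                              device=image.device, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)

    for _ in range(steps):
        m = torch.sigmoid(mask_logits)
        noise = torch.randn_like(image)
        perturbed = (1 - m) * image + m * noise   # inject noise where m is high
        logits = seg_model(perturbed)
        # Preserve the clean segmentation on the perturbed input ...
        preserve = F.cross_entropy(logits, target)
        # ... while rewarding noise over as much of the image as possible.
        loss = preserve - lambda_noise * m.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Regions the model needs intact (low mask) form the explanation map.
    return 1 - torch.sigmoid(mask_logits).detach()
```

Under this formulation, the quantitative evaluation the abstract mentions follows naturally: one can obscure the high-mask regions of held-out images and verify that downstream segmentation performance is preserved, something saliency maps such as Grad-CAM do not directly support.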