The performance of convolutional neural networks has continued to improve over the last decade. At the same time, as model complexity grows, it becomes increasingly difficult to explain model decisions. Such explanations may be of critical importance for the reliable operation of human-machine pairing setups, or for model selection when the "best" model among many equally accurate models must be established. Saliency maps represent one popular way of explaining model decisions by highlighting the image regions models deem important when making a prediction. However, examining saliency maps at scale is not practical. In this paper, we propose five novel methods of leveraging model saliency to explain model behavior at scale. These methods ask: (a) what is the average entropy of a model's saliency maps, (b) how does model saliency change when the model is fed out-of-set samples, (c) how closely does model saliency follow geometrical transformations, (d) how stable is model saliency across independent training runs, and (e) how does model saliency react to saliency-guided image degradations. To assess the proposed measures on a concrete and topical problem, we conducted a series of experiments on the task of synthetic face detection with two types of models: those trained traditionally with cross-entropy loss, and those guided by human saliency during training to increase generalizability. These two types of models are characterized by different, interpretable properties of their saliency maps, which allows for evaluating the correctness of the proposed measures. We offer source code for each measure along with this paper.
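To make measure (a) concrete, the following is a minimal sketch of how the average saliency-map entropy could be computed, assuming saliency maps are available as non-negative 2D NumPy arrays; the function names (`saliency_entropy`, `average_entropy`) are illustrative and not taken from the paper's released code.

```python
import numpy as np

def saliency_entropy(saliency_map: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a single saliency map treated as a distribution.

    The map is normalized so its values sum to 1. Low entropy indicates
    saliency concentrated on a few regions; high entropy a diffuse map.
    """
    p = saliency_map.astype(np.float64).ravel()
    p = np.clip(p, 0.0, None)        # saliency is assumed non-negative
    p = p / (p.sum() + eps)          # normalize to a probability distribution
    return float(-np.sum(p * np.log2(p + eps)))

def average_entropy(saliency_maps) -> float:
    """Average entropy over a collection of saliency maps (measure (a))."""
    return float(np.mean([saliency_entropy(s) for s in saliency_maps]))
```

Since the maximum attainable entropy grows with the number of pixels (log2 of the pixel count), maps of different resolutions could additionally be normalized by that maximum before averaging, if cross-resolution comparison is needed.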