Deep convolutional neural networks have proven their effectiveness and are widely acknowledged as the dominant method for image classification. However, a severe drawback of deep convolutional neural networks is their poor explainability. Unfortunately, in many real-world applications, users need to understand the rationale behind the predictions of deep convolutional neural networks when deciding whether to trust those predictions. To resolve this issue, a novel genetic algorithm-based method is proposed, for the first time, to automatically evolve local explanations that can assist users in assessing the rationality of the predictions. Furthermore, the proposed method is model-agnostic, i.e., it can be utilised to explain any deep convolutional neural network model. In the experiments, ResNet is used as an example model to be explained, and the ImageNet dataset is selected as the benchmark dataset. DenseNet and MobileNet are further explained to demonstrate the model-agnostic characteristic of the proposed method. The evolved local explanations on four images, randomly selected from ImageNet, are presented, showing that the evolved local explanations are straightforward for humans to recognise. Moreover, the evolved explanations explain the predictions of deep convolutional neural networks on all four images very well by successfully capturing meaningful, interpretable features of the sample images. Further analysis based on 30 runs of the experiments shows that the evolved local explanations can also improve the probabilities/confidences of the deep convolutional neural network models in making the predictions. The proposed method can obtain local explanations within one minute, which is more than ten times faster than LIME (the state-of-the-art method).
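The abstract does not specify the encoding or operators of the genetic algorithm, but a common way to evolve a local explanation is to treat an individual as a binary mask over interpretable image regions (e.g. superpixels) and use the model's prediction confidence on the masked image as the fitness. The sketch below illustrates that idea under these assumptions; the region count, the `model_confidence` surrogate (standing in for a real CNN's predicted-class probability), and all GA parameters are hypothetical, not taken from the paper.

```python
import random

# Hypothetical setup: the image is segmented into N_REGIONS interpretable
# regions; an individual is a binary mask that keeps or hides each region.
N_REGIONS = 10
MEANINGFUL = {2, 5, 7}  # toy "ground-truth" regions the surrogate rewards

def model_confidence(mask):
    # Surrogate for a CNN's confidence on the masked image: it rises as
    # meaningful regions are kept and falls slightly with extra clutter.
    kept = [i for i, bit in enumerate(mask) if bit]
    hits = sum(1 for i in kept if i in MEANINGFUL)
    noise = len(kept) - hits
    return hits / len(MEANINGFUL) - 0.05 * noise

def evolve_explanation(pop_size=30, generations=40, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    # Random initial population of binary region masks.
    pop = [[rng.randint(0, 1) for _ in range(N_REGIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist truncation selection: keep the fitter half.
        pop.sort(key=model_confidence, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_REGIONS)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=model_confidence)

best = evolve_explanation()
```

The returned mask marks which regions to keep visible; overlaying it on the original image yields a human-recognisable local explanation, and because the fitness only queries the model's output probabilities, the loop is model-agnostic in the same sense described above.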