This paper quantifies the quality of heatmap-based eXplainable AI (XAI) methods with respect to the image classification problem. Here, a heatmap is considered desirable if it improves the probability of predicting the correct class. Different heatmap-based XAI methods are empirically shown to improve classification confidence to different extents depending on the dataset; for example, Saliency works best on ImageNet and Deconvolution on the Chest X-Ray Pneumonia dataset. The novelty includes a new gap distribution that shows a stark difference between correct and wrong predictions. Finally, the generative augmentative explanation is introduced, a method for generating heatmaps capable of improving predictive confidence to a high level.