One way to achieve eXplainable artificial intelligence (XAI) is through the use of post-hoc analysis methods. In particular, methods that generate heatmaps have been used to explain black-box models such as deep neural networks. In some cases, heatmaps are appealing because they can be understood intuitively and visually. However, quantitative analyses that demonstrate the actual potential of heatmaps have been lacking, and comparisons between different methods are not standardized either. In this paper, we introduce a synthetic dataset that can be generated ad hoc, along with ground-truth heatmaps, for better quantitative assessment. Each sample is an image of a cell with easily distinguishable features, facilitating a more transparent assessment of different XAI methods. We make comparisons and recommendations, clarify shortcomings, and suggest future research directions for handling the finer details of selected post-hoc analysis methods.
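The kind of synthetic sample the abstract describes, an image with an easily distinguishable feature paired with a ground-truth heatmap marking where the signal lives, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual generator; the function name, parameters, and the choice of a bright disk on a noisy background are all assumptions made for illustration.

```python
import numpy as np

def make_cell_sample(size=64, radius=10, noise=0.1, seed=0):
    """Toy synthetic sample (illustrative, not the paper's generator):
    a bright disk on a noisy background, plus a binary ground-truth
    heatmap marking the pixels that carry the class-relevant feature."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    cy, cx = size // 2, size // 2
    # Ground truth: pixels inside the disk are the "explainable" feature.
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    image = noise * rng.standard_normal((size, size))
    image[mask] += 1.0  # inject the distinguishable feature
    return image, mask.astype(np.float32)

image, gt_heatmap = make_cell_sample()
```

With ground truth available per pixel, a post-hoc heatmap can be scored directly (e.g. by overlap with `gt_heatmap`) rather than judged only by visual inspection.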