A particular class of Explainable AI (XAI) methods explains the predictions of a Convolutional Neural Network (CNN) by providing saliency maps that highlight the parts of an image the model attends to when classifying it. These methods offer users an intuitive way to understand predictions made by CNNs. Yet, beyond quantitative computational tests, the vast majority of evidence that these methods are valuable is anecdotal. Given that humans are the intended end-users of such methods, we devise three human-subject experiments to gauge the effectiveness of these saliency-based explainability methods.
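As a concrete illustration of the kind of method under study, the sketch below computes a vanilla-gradient saliency map for a pretrained image classifier. The model choice (torchvision ResNet-18), the image file name, and the gradient-based formulation are illustrative assumptions only, not the specific saliency methods evaluated in this work.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Minimal vanilla-gradient saliency sketch: the gradient of the top class
# score with respect to the input pixels highlights the image regions the
# classifier is most sensitive to.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder path, assumed to exist locally.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()

# Saliency map: maximum absolute gradient across colour channels per pixel.
saliency = img.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)
```

Overlaying `saliency` on the input image (e.g., as a heatmap) yields the kind of visual explanation whose usefulness to human users the experiments in this paper are designed to test.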