Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means ought to be rationally adopted to achieve different epistemic ends. Applied to XAI, different topics, stakeholders, and goals thus require different instruments. I call this the means-end account of XAI. The means-end account has a descriptive and a normative component: on the one hand, I show how the specific means-end relations give rise to a taxonomy of existing contributions to the field of XAI; on the other hand, I argue that the suitability of XAI methods can be assessed by analyzing whether they are prescribed by a given topic, stakeholder, and goal.