This paper proposes an alternative approach to the basic taxonomy of explanations produced by explainable artificial intelligence techniques. Methods of Explainable Artificial Intelligence (XAI) were developed to answer the question of why a certain prediction or estimate was made, preferably in terms that a human agent can easily understand. XAI taxonomies proposed in the literature concentrate mainly on distinguishing explanations by how they involve the human agent, which makes it difficult to distinguish and compare explanations in a more mathematical way. This paper restricts its attention to the case where the data set of interest belongs to $\mathbb{R}^n$ and proposes a simple linear-algebra-based taxonomy for local explanations.