Despite the progress observed with model-agnostic explainable AI (XAI), model-agnostic XAI can produce incorrect explanations. One alternative is the so-called formal approaches to XAI, which include PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. The computation of relevant features serves to trade off probabilistic precision against the number of features in an explanation. However, even for very simple classifiers, the complexity of computing sets of relevant features is prohibitive. This paper investigates the computation of relevant sets for Naive Bayes Classifiers (NBCs) and shows that, in practice, these are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained with NBCs.