Explainable Artificial Intelligence (XAI) studies and develops techniques to explain black-box models, that is, models that provide limited self-explanation of their predictions. In recent years, XAI researchers have been formalizing proposals and developing new measures to explain how these models arrive at specific predictions. Previous studies have found evidence of how model complexity (dataset and algorithm) affects the global explanations generated by the XAI measures Ciu, Dalex, Eli5, Lofo, Shap, and Skater, suggesting that there is room for a new XAI measure built on model complexity. This research therefore proposes a measure called Explainable based on Item Response Theory (eXirt), which is capable of explaining tree-ensemble models by using the properties of Item Response Theory (IRT). For this purpose, a benchmark was created using 40 different datasets and 2 different algorithms (Random Forest and Gradient Boosting), generating 6 explainability ranks from the known XAI measures, along with 1 data-purity rank and 1 rank from eXirt, amounting to 8 global ranks for each model, i.e., 640 ranks altogether. The results show that eXirt produced ranks different from those of the other measures, demonstrating that the proposed methodology generates global explanations of tree-ensemble models not yet explored, for both harder-to-explain and easier-to-explain models.
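To make the benchmark protocol concrete, the following minimal sketch trains the two tree-ensemble algorithms on one dataset and derives a global feature-relevance rank for each model. It is illustrative only: a synthetic dataset stands in for the 40 benchmark datasets, and scikit-learn's permutation importance stands in for the six XAI measures (Ciu, Dalex, Eli5, Lofo, Shap, Skater) and for eXirt itself, whose IRT-based procedure is not reproduced here.

```python
# Illustrative sketch of the rank-generation loop: one (hypothetical)
# dataset, two tree-ensemble algorithms, one global rank per model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def global_rank(model, X_test, y_test, seed=0):
    """Return feature indices ordered from most to least relevant."""
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=seed)
    return np.argsort(result.importances_mean)[::-1]

# Synthetic stand-in for one of the benchmark datasets.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ranks = {}
for name, algo in [("RandomForest", RandomForestClassifier(random_state=0)),
                   ("GradientBoosting", GradientBoostingClassifier(random_state=0))]:
    model = algo.fit(X_tr, y_tr)
    ranks[name] = global_rank(model, X_te, y_te)

for name, rank in ranks.items():
    # e.g. [3 0 5 ...] means feature 3 is ranked most relevant.
    print(name, rank)
```

In the paper's full setting, repeating this loop over 40 datasets, 2 algorithms, and 8 rank sources (6 XAI measures, 1 data-purity rank, and eXirt) yields the 640 global ranks that are compared.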