Intelligent systems that use Machine Learning classification algorithms are increasingly common in everyday life. However, many of these systems rely on black-box models that cannot explain their own predictions. This situation leads researchers in the field, and society at large, to the following question: how can I trust the prediction of a model I cannot understand? In this context, Explainable Artificial Intelligence (XAI) has emerged as a field of AI that aims to create techniques capable of explaining a classifier's decisions to the end user. Several techniques have resulted, such as Explanation-by-Example, which currently has few consolidated initiatives in the XAI community. This research explores Item Response Theory (IRT) as a tool to explain models and to measure the reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, and the Random Forest model was adopted as the classifier under test. On the test set, 83.8% of the model's errors came from instances that IRT flagged as unreliable.
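To make the idea concrete, the sketch below shows one way IRT-style reliability could be attached to Random Forest predictions: each test instance is treated as an "item" with difficulty, discrimination, and guessing parameters, the classifier as a "respondent" with an ability score, and instances whose predicted probability of a correct response is low are flagged as unreliable. This is a minimal illustration under a 3PL assumption; the dataset, the randomly drawn item parameters, the ability value `theta`, and the 0.5 threshold are placeholders and are not taken from the paper, which estimates these quantities from data.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def p_correct_3pl(theta, a, b, c):
    """3PL IRT probability that a respondent with ability theta answers an item
    with discrimination a, difficulty b, and guessing c correctly."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Train a Random Forest on one dataset (placeholder for the paper's four datasets).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
correct = rf.predict(X_te) == y_te

# Placeholder item parameters and ability; in practice these are estimated
# (e.g., from the response patterns of a pool of classifiers).
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=len(y_te))   # discrimination per instance
b = rng.normal(0.0, 1.0, size=len(y_te))    # difficulty per instance
c = rng.uniform(0.0, 0.25, size=len(y_te))  # guessing per instance
theta = 1.0                                 # assumed ability of the Random Forest

# Flag instances where the IRT model gives the classifier a low chance of success.
reliable = p_correct_3pl(theta, a, b, c) >= 0.5
unreliable = ~reliable
if unreliable.any():
    print("error rate on instances flagged unreliable:",
          (~correct[unreliable]).mean())
```

In the paper's setting, the interesting statistic is the converse of the last line: what fraction of the classifier's errors fall on instances flagged unreliable (reported as 83.8% on the test set).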