The growing availability of data and computing power fuels the development of predictive models. To ensure the safe and effective operation of such models, we need methods for their exploration, debugging, and validation. New methods and tools for this purpose are being developed within the eXplainable Artificial Intelligence (XAI) subdomain of machine learning. In this work, (1) we present a taxonomy of methods for model explanation, (2) we identify and compare 27 packages available in R for XAI analysis, (3) we present examples of the application of selected packages, and (4) we discuss recent trends in XAI. The article is primarily devoted to the tools available in R, but since Python code is easy to integrate, we also show examples for the most popular Python libraries.
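As a minimal sketch of the R/Python interoperability mentioned above (not an example from the paper itself), the snippet below uses the reticulate package to train a scikit-learn model and explain it with the Python shap library directly from R; the choice of sklearn, shap, and the mtcars data is purely illustrative.

```r
# Assumes Python with scikit-learn and shap installed and visible to reticulate.
library(reticulate)

sklearn <- import("sklearn.ensemble")
shap    <- import("shap")

# Train a Python model on an R data frame (the classic mtcars data).
X <- mtcars[, c("hp", "wt", "qsec")]
y <- mtcars$mpg
model <- sklearn$RandomForestRegressor(n_estimators = 100L)
model$fit(X, y)

# Compute SHAP attributions with the Python shap library, called from R.
explainer   <- shap$TreeExplainer(model)
shap_values <- explainer$shap_values(X)
head(shap_values)  # one attribution per feature and observation
```

Because reticulate converts R data frames and matrices to their Python counterparts automatically, the same workflow can mix R-native explainers with Python ones on a single model.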