As machine learning models grow more complex and their applications become more high-stakes, tools for explaining model predictions have become increasingly important. This has spurred a flurry of research in model explainability and has given rise to feature attribution methods such as LIME and SHAP. Despite their widespread use, evaluating and comparing different feature attribution methods remains challenging: evaluations ideally require human studies, and empirical evaluation metrics are often data-intensive or computationally prohibitive on real-world datasets. In this work, we address this issue by releasing XAI-Bench: a suite of synthetic datasets along with a library for benchmarking feature attribution algorithms. Unlike real-world datasets, synthetic datasets allow the efficient computation of conditional expected values that are needed to evaluate ground-truth Shapley values and other metrics. The synthetic datasets we release offer a wide variety of parameters that can be configured to simulate real-world data. We demonstrate the power of our library by benchmarking popular explainability techniques across several evaluation metrics and across a variety of settings. The versatility and efficiency of our library will help researchers bring their explainability methods from development to deployment. Our code is available at https://github.com/abacusai/xai-bench.
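To make the core idea concrete, the following is a minimal, illustrative sketch (not the xai-bench API) of why a known synthetic generative process helps: when the feature distribution is known in closed form, the expectations underlying Shapley values can be evaluated exactly, yielding a ground truth against which an attribution method's output can be scored. The model, weights, and sampling routine below are assumptions chosen for illustration.

```python
import numpy as np

# Sketch: exact vs. estimated Shapley values under a known synthetic
# data distribution (independent zero-mean Gaussian features).
rng = np.random.default_rng(0)

n_features = 3
mu = np.zeros(n_features)           # known feature means
w = np.array([2.0, -1.0, 0.5])      # known linear weights

def f(z):
    """Model to explain: linear terms plus one interaction."""
    return z @ w + z[0] * z[1]

x = rng.normal(mu, 1.0)             # instance to explain

# Ground-truth Shapley values for independent zero-mean features:
# the linear part contributes w_i * x_i, and the x0*x1 interaction
# is split evenly between features 0 and 1.
ground_truth = w * x
ground_truth[0] += x[0] * x[1] / 2
ground_truth[1] += x[0] * x[1] / 2

# Permutation-sampling approximation (mean-baseline), standing in for
# an attribution method whose output we want to evaluate.
def sampled_shapley(model, x, baseline, n_perm=5000):
    phi = np.zeros(len(x))
    for _ in range(n_perm):
        z, prev = baseline.copy(), model(baseline)
        for i in rng.permutation(len(x)):
            z[i] = x[i]             # reveal feature i
            cur = model(z)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm

estimate = sampled_shapley(f, x, mu)
print("ground truth:  ", ground_truth)
print("estimate:      ", estimate)
print("mean abs error:", np.abs(ground_truth - estimate).mean())
```

The same comparison is intractable on real-world data, where the conditional expectations would have to be estimated from samples; with a synthetic distribution they are available analytically, which is what makes ground-truth evaluation metrics cheap to compute.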