As machine learning models grow more complex and their applications become more high-stakes, tools for explaining model predictions have become increasingly important. Despite the widespread use of explainability techniques, evaluating and comparing different feature attribution methods remains challenging: evaluations ideally require human studies, and empirical evaluation metrics are often computationally prohibitive on real-world datasets. In this work, we address this issue by releasing XAI-Bench: a suite of synthetic datasets along with a library for benchmarking feature attribution algorithms. Unlike real-world datasets, synthetic datasets allow the efficient computation of the conditional expected values needed to evaluate ground-truth Shapley values and other metrics. The synthetic datasets we release offer a wide variety of parameters that can be configured to simulate real-world data. We demonstrate the power of our library by benchmarking popular explainability techniques across several evaluation metrics and identifying their failure modes. The efficiency of our library will help bring new explainability methods from development to deployment.
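To make the role of a known synthetic data distribution concrete, the sketch below is a hypothetical illustration (not the XAI-Bench API): for a linear model with independent Gaussian features, the conditional expectations required for Shapley values have a closed form, so exact ground-truth attributions can be computed and compared against a sampled estimate.

```python
# Minimal sketch (assumed setup, not the XAI-Bench API): with a synthetic
# feature distribution whose parameters are known, e.g. independent Gaussian
# features, the conditional expectation E[f(x) | x_S] is available in closed
# form, so ground-truth Shapley values of a linear model can be computed
# exactly and used to score an attribution method.
import numpy as np

rng = np.random.default_rng(0)
d = 4
mu = np.zeros(d)                      # known feature means (assumed)
sigma = np.ones(d)                    # independent unit-variance features (assumed)
w = rng.normal(size=d)                # linear model f(x) = w @ x + b
b = 0.5

x = mu + sigma * rng.normal(size=d)   # one explicand drawn from the data distribution

# For a linear model with independent features, the exact Shapley value of
# feature i is w_i * (x_i - mu_i): adding feature i to any coalition shifts
# E[f] by the same amount, so the average over coalitions is exact.
ground_truth_shapley = w * (x - mu)

def sampled_shapley(f, x, background, n_perm=2000):
    """Crude permutation-sampling Shapley estimate a benchmark could score."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background.copy()         # start from the background point
        prev = f(z)
        for i in order:
            z[i] = x[i]               # reveal feature i
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm

f = lambda z: w @ z + b
estimate = sampled_shapley(f, x, mu)
print(np.round(ground_truth_shapley, 3))
print(np.round(estimate, 3))          # matches the ground truth for a linear model
```

For nonlinear models the conditional expectations no longer collapse to a single reference point, which is exactly where a synthetic distribution with known parameters makes ground-truth evaluation tractable.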