Explaining the predictions of opaque machine learning algorithms is an important and challenging task, especially as complex models are increasingly used to assist in high-stakes decisions such as those arising in healthcare and finance. Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context (e.g., feature attributions) or difficult to summarize (e.g., counterfactuals). In this paper, I introduce \emph{rational Shapley values}, a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner. I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI. By pairing the distribution of random variables with the appropriate reference class for a given explanation task, I illustrate through theory and experiments how user goals and knowledge can inform and constrain the solution set in an iterative fashion. The method compares favorably to state-of-the-art XAI tools in a range of quantitative and qualitative comparisons.
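To make the reference-class idea concrete, the following minimal Python sketch (a hypothetical illustration, not the paper's implementation) estimates Shapley values by permutation sampling, with absent features imputed from an explicitly supplied reference sample. The function name and parameters are assumptions for exposition; the key point is that passing a subpopulation as `background` yields context-sensitive attributions, in contrast to standard tools that always marginalize over the full data distribution.

```python
import numpy as np

def shapley_values(predict, x, background, n_samples=200, seed=None):
    """Permutation-sampling estimate of Shapley values for one instance.

    Absent features are imputed from `background`, the reference sample.
    Restricting `background` to a relevant subpopulation (the "reference
    class") makes the resulting attributions context-sensitive.
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)                # random coalition ordering
        z = background[rng.integers(len(background))].astype(float)
        prev = predict(z[None, :])[0]             # baseline: no features of x yet
        for j in order:
            z[j] = x[j]                           # add feature j to the coalition
            curr = predict(z[None, :])[0]
            phi[j] += curr - prev                 # marginal contribution of j
            prev = curr
    return phi / n_samples
```

For example, calling `shapley_values(model.predict, x, X[mask])` with `mask` selecting a relevant subgroup (say, applicants who were denied a loan) attributes the prediction relative to that reference class rather than the whole population, which is the context sensitivity motivated above.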