Explainability is highly desired in Machine Learning (ML) systems supporting high-stakes policy decisions in areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work does not take real-world needs into account. A majority of proposed methods are designed with \textit{generic} explainability goals, without well-defined use-cases or intended end-users, and are evaluated on simplified tasks, benchmark problems/datasets, or with proxy users (e.g., AMT). We argue that these simplified evaluation settings do not capture the nuances and complexities of real-world applications. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications remain unclear. In this work, we take steps toward addressing this gap for the domain of public policy. First, we identify the primary use-cases of explainable ML within public policy problems. For each use-case, we define the end-users of explanations and the specific goals the explanations must fulfill. Finally, we map existing work in explainable ML to these use-cases, identify gaps in established capabilities, and propose research directions to fill those gaps in order to have practical societal impact through ML. Our contributions are 1) a methodology that explainable ML researchers can use to identify use-cases and develop methods targeted at them, and 2) an application of that methodology to the domain of public policy, providing an example for researchers of how to develop explainable ML methods that result in real-world impact.