Explainability is a crucial requirement for the effectiveness and adoption of Machine Learning (ML) models supporting decisions in high-stakes public policy areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are evaluated on benchmark datasets with generic explainability goals, without clear use-cases or intended end-users. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications is unclear. This paper focuses on filling this void for the domain of public policy. First, we develop a taxonomy of explainability use-cases within public policy problems; second, for each use-case, we define the end-users of explanations and the specific goals explainability has to fulfill; third, we map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps in order to have a practical societal impact through ML.