To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders' knowledge from their interpretability needs. We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu. We additionally distill a hierarchical typology of stakeholder needs that distinguishes higher-level domain goals from lower-level interpretability tasks. In assessing the descriptive, evaluative, and generative powers of our framework, we find our more nuanced treatment of stakeholders reveals gaps and opportunities in the interpretability literature, adds precision to the design and comparison of user studies, and facilitates a more reflexive approach to conducting this research.