As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, have different explanation needs. To address these needs, in 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods and two evaluation metrics. This paper examines the impact of the toolkit with several case studies, statistics, and community feedback. The different ways in which users have experienced AI Explainability 360 have resulted in multiple types of impact and improvements in multiple metrics, highlighted by the adoption of the toolkit by the independent LF AI & Data Foundation. The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.