The diffusion of artificial intelligence (AI) applications in organizations and society has fueled research on explaining AI decisions. The explainable AI (xAI) field is rapidly expanding, with numerous ways of extracting information from and visualizing the output of AI technologies (e.g., deep neural networks). Yet we have a limited understanding of how xAI research addresses the need for explainable AI. We conduct a systematic review of the xAI literature and identify four thematic debates central to how xAI addresses the black-box problem. Based on this critical analysis of the xAI scholarship, we synthesize the findings into a future research agenda to further the xAI body of knowledge.