Explainable AI (XAI) research has been booming, but the question "$\textbf{to whom}$ are we making AI explainable?" has yet to receive sufficient attention. Little of XAI is comprehensible to non-AI experts, who are nonetheless the primary audience and major stakeholders of deployed AI systems in practice. The gap is glaring: what counts as "explained" to AI experts versus non-experts differs greatly in practical scenarios. This gap has produced two distinct cultures of expectations, goals, and forms of XAI in real-life AI deployments. We argue that it is critical to develop XAI methods for non-technical audiences. We then present a real-life case study in which AI experts provided non-technical explanations of AI decisions to non-technical stakeholders, completing a successful deployment in a highly regulated industry. Finally, we synthesize lessons learned from the case and share a list of suggestions for AI experts to consider when explaining AI decisions to non-technical stakeholders.