Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Using two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines to the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss the conceptual implications of the framework, share practical considerations in operationalizing it, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.