As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially situated, and AI systems are often socio-organizationally embedded; however, Explainable AI (XAI) approaches have remained predominantly algorithm-centered. We take a developmental step towards socially situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggest constitutive design elements of ST and develop a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.