Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While "opening the opaque box" is important, understanding who opens the box can determine whether the resulting Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos (people with and without a background in AI) perceive different types of AI explanations. These groups were chosen to examine how disparities in AI background can exacerbate the creator-consumer gap. Quantitatively, we report how perceptions differ along five dimensions: confidence, intelligence, understandability, second chance, and friendliness. Qualitatively, we highlight how each group's AI background influences its interpretations and elucidate why the differences might exist through the lenses of appropriation and cognitive heuristics. We find that (1) both groups had unwarranted faith in numbers, though to different extents and for different reasons, (2) each group found explanatory value in different explanations that went beyond the usage we designed them for, and (3) each group had different requirements for what counts as a humanlike explanation. Using our findings, we discuss potential negative consequences, such as harmful manipulation of user trust, and propose design interventions to mitigate them. By bringing conscious awareness to how and why AI backgrounds shape the perceptions of potential creators and consumers in XAI, our work takes a formative step in advancing a pluralistic Human-centered Explainable AI discourse.