XAI research often focuses on settings where people learn about and assess algorithmic systems individually. However, as more public AI systems are deployed, it becomes essential for XAI to facilitate collective understanding and deliberation. We conducted a task-based interview study involving 8 focus groups and 12 individual interviews to explore how explanations can support AI novices in understanding and forming opinions about AI systems. Participants received a collection of explanations organized into four information categories, which they used to solve tasks and to decide about a system's deployment. These explanations improved or calibrated participants' self-reported understanding and decision confidence, and they facilitated group discussions. Participants valued both technical and contextual information, as well as the self-directed and modular explanation structure. Our contributions include an explanation approach that facilitates both individual and collaborative interaction, along with explanation design recommendations, including active and controllable exploration, different levels of information detail and breadth, and adaptations to the needs of decision subjects.