The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI), i.e. approaches that make black-box AI systems explainable to human decision makers, with the aim of making these systems more acceptable and more usable tools and supports. However, we make the point that, rather than always rendering black boxes transparent, these approaches risk \emph{painting the black boxes white}, thus failing to provide the level of transparency that would increase the systems' usability and comprehensibility; or, worse, they risk generating new errors, in what we term the \emph{white-box paradox}. To address these usability-related issues, in this work we focus on the cognitive dimension of users' perception of explanations and XAI systems. To this aim, we designed and conducted a questionnaire-based experiment involving 44 cardiology residents and specialists in an AI-supported ECG reading task. In doing so, we investigated different research questions concerning the relationship between users' characteristics (e.g. expertise) and their perception of AI and XAI systems, including their trust, the perceived quality of the explanations, and their tendency to defer the decision process to automation (i.e. technology dominance), as well as the mutual relationships among these dimensions. Our findings contribute to the evaluation of AI-based support systems from a Human-AI interaction-oriented perspective and lay the groundwork for further investigation of XAI and its effects on decision making and user experience.