In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making. In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or perhaps even worse, people follow them blindly, even when the recommendations are wrong. Explainable artificial intelligence mitigates this by helping people to understand how and why models give certain recommendations. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded. We argue this is because we have failed to account for two things. First, recommendations (and their explanations) take control from human decision makers, limiting their agency. Second, giving recommendations and explanations does not align with the cognitive processes employed by people making decisions. This position paper proposes a new conceptual framework called Evaluative AI for explainable decision support. This is a machine-in-the-loop paradigm in which decision support tools provide evidence for and against decisions made by people, rather than provide recommendations to accept or reject. We argue that this mitigates issues of over- and under-reliance on decision support tools, and better leverages human expertise in decision making.