Human-AI decision making is increasingly ubiquitous, and explanations have been proposed to facilitate better Human-AI interactions. Recent research has investigated the positive impact of explanations on decision subjects' fairness perceptions in algorithmic decision-making. Despite these advances, most studies have captured the effect of explanations in isolation, treating explanations as ends in themselves and reducing them to technical solutions provided through XAI methodologies. In this vision paper, we argue that the effect of explanations on fairness perceptions should instead be captured in relation to decision subjects' right to contest such decisions. Since contestable AI systems are open to human intervention throughout their lifecycle, contestability requires explanations that go beyond outcomes and also capture the rationales that led to the development and deployment of the algorithmic system in the first place. We refer to such explanations as process-centric explanations. In this work, we introduce the notion of process-centric explanations and describe some of the main challenges and research opportunities for generating and evaluating such explanations.