Decision-making algorithms are being used to make important decisions, such as who should be enrolled in health care programs and who should be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems that provide post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.