Explainable Artificial Intelligence (XAI) techniques are frequently requested by users of AI systems who want to understand complex models and their predictions, and to build trust in them. While such techniques can be useful for specific tasks during development, their adoption by organisations as a means of enhancing trust in machine learning systems has unintended consequences. In this paper we discuss the limitations of XAI in deployment and conclude that transparency, together with rigorous validation, is better suited to gaining trust in AI systems.