Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making. Despite the large body of work on the topic, the benefits of proposed solutions are mostly evaluated from a conceptual or theoretical point of view, and their usefulness for real-world use cases remains uncertain. In this work, we aim to state clear user-centric desiderata for explainable AI that reflect common explainability needs experienced in the statistical production systems of the European Central Bank. We link the desiderata to archetypical user roles and give examples of techniques and methods that can be used to address these users' needs. To this end, we provide two concrete use cases from the domain of statistical data production in central banks: the detection of outliers in the Centralised Securities Database and the data-driven identification of data quality checks for the Supervisory Banking data system.