Recent AI-related scandals have cast a spotlight on accountability in AI, attracting increasing public interest and concern. This paper draws on literature from public policy and governance to make two contributions. First, we propose an AI accountability ecosystem as a useful lens on the system, with different stakeholders requiring and contributing to specific accountability mechanisms. We argue that the present ecosystem is unbalanced, with a need for improved transparency via AI explainability, and for adequate documentation and process formalisation to support internal audit, leading eventually to external accreditation processes. Second, we use a case study in the gambling sector to illustrate, within a subset of the overall ecosystem, the need for industry-specific accountability principles and processes. We define and critically evaluate the implementation of key accountability principles in the gambling industry, namely addressing algorithmic bias and model explainability, before concluding and discussing directions for future work based on our findings.

Keywords: Accountability, Explainable AI, Algorithmic Bias, Regulation.