As Artificial Intelligence (AI) systems are integrated into more aspects of society, they offer new capabilities but also cause a range of harms that are drawing increasing scrutiny. A large body of work in the Responsible AI community has focused on identifying and auditing these harms. However, much less is understood about what happens after harm occurs: what constitutes reparation, who initiates it, and how effective it is. In this paper, we develop a taxonomy of AI harm reparation based on a thematic analysis of real-world incidents. The taxonomy organizes reparative actions into four overarching goals: acknowledging harm, attributing responsibility, providing remedies, and enabling systemic change. We apply this framework to a dataset of 1,060 AI-related incidents, analyzing the prevalence of each action and the distribution of stakeholder involvement. Our findings show that reparation efforts are concentrated in early, symbolic stages, with limited action toward accountability or structural reform. Drawing on theories of justice, we argue that existing responses fall short of delivering meaningful redress. This work contributes a foundation for advancing more accountable and reparative approaches to Responsible AI.