Much attention has focused on algorithmic audits and impact assessments as tools to hold developers and users of algorithmic systems accountable. But existing algorithmic accountability policy approaches have neglected the lessons of non-algorithmic domains: notably, the importance of interventions that allow for the effective participation of third parties. Our paper synthesizes lessons from other fields on how to craft effective systems of external oversight for algorithmic deployments. First, we discuss the challenges of third-party oversight in the current AI landscape. Second, we survey audit systems across domains - e.g., financial, environmental, and health regulation - and show that the institutional design of such audits is far from monolithic. Finally, we survey the evidence base around these design components and spell out the implications for algorithmic auditing. We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability; sustained focus on institutional design will be required for meaningful third-party involvement.