Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. However, the functional quality of this regulatory approach has not been thoroughly interrogated. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, human oversight policies legitimize government use of flawed and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more rigorous approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, vendors or agencies must conduct preliminary evaluations of whether people can effectively oversee the algorithm.