Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to oversee algorithmic decision-making. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more stringent approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, agencies or vendors must conduct preliminary evaluations of whether people can effectively oversee the algorithm.