As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing their harms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.