Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains, which have developed policies to guide how to balance the two in conditions of uncertainty. While computer science also commonly studies accuracy-efficiency trade-offs, their policy implications remain poorly examined. Drawing on risk assessment practices in the US, we argue that, since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems. We focus our analysis on distributed machine learning systems. Understanding the policy implications in this area is particularly urgent because such systems, which include autonomous vehicles, tend to be high-stakes and safety-critical. We 1) describe how the trade-off takes shape for these systems, 2) highlight gaps between existing US risk assessment standards and what these systems require to be properly assessed, and 3) make specific calls to action to facilitate accountability when hypothetical risks concerning the accuracy-efficiency trade-off become realized as accidents in the real world. We close by discussing how such accountability mechanisms encourage more just, transparent governance aligned with public values.