Trade-offs between accuracy and efficiency arise in many non-computing domains, such as law and public health, which have developed rules and heuristics for balancing the two under conditions of uncertainty. While accuracy-efficiency trade-offs are also commonly acknowledged in some areas of computer science, their policy implications remain poorly examined. Drawing on risk assessment practices in the US, we argue that, just as examining accuracy-efficiency trade-offs has helped guide governance in other domains, explicitly framing such trade-offs in computing can similarly inform the governance of computer systems. Our discussion focuses on real-time distributed ML systems; understanding the policy implications in this area is particularly urgent because such systems, which include autonomous vehicles, tend to be high-stakes and safety-critical. We describe how the trade-off takes shape for these systems, highlight gaps between existing US risk assessment standards and what these systems require in order to be properly assessed, and make specific calls to action to facilitate accountability when hypothetical risks are realized as accidents in the real world. We close by discussing how such accountability mechanisms encourage more just, transparent governance aligned with public values.