Legal literature on machine learning (ML) tends to focus on harms and, as a result, to reason about individual model outcomes and summary error rates. This focus on model-level outcomes and errors has masked important aspects of ML that are rooted in its inherent non-determinism. We show that the effects of non-determinism, and consequently its implications for the law, instead become clearer from the perspective of reasoning about ML outputs as probability distributions over possible outcomes. This distributional viewpoint accounts for non-determinism by emphasizing the possible outcomes of ML. Importantly, this type of reasoning is not mutually exclusive with current legal reasoning; it complements (and in fact can strengthen) analyses concerning individual, concrete outcomes for specific automated decisions. By clarifying the important role of non-determinism, we demonstrate that ML code falls outside the cyberlaw frame of treating "code as law," as this frame assumes that code is deterministic. We conclude with a brief discussion of what work ML can do to constrain the potentially harm-inducing effects of non-determinism, and we clarify where the law must do work to bridge the gap between its current individual-outcome focus and the distributional approach that we recommend.
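The distributional viewpoint can be made concrete with a toy experiment. The sketch below is purely illustrative and not from the paper: it trains the same tiny logistic-regression model many times, where the random seed controls weight initialization and example ordering (two common sources of training non-determinism), and then records the decision each retrained model makes for one fixed "borderline" individual. The dataset, seed range, and applicant feature are all hypothetical assumptions; the point is only that repeated retraining yields an empirical distribution over outcomes for the same person, rather than a single fixed outcome.

```python
import math
import random

def train_logreg_sgd(data, seed, epochs=20, lr=0.5):
    """Train a one-feature logistic regression with SGD.

    The seed controls both weight initialization and the order in
    which training examples are visited -- two common sources of
    non-determinism in real ML training pipelines.
    """
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        shuffled = data[:]
        rng.shuffle(shuffled)
        for x, y in shuffled:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Hypothetical training set: one feature (e.g., a normalized score)
# mapped to a binary label, with some noise near the middle.
data = [(0.1, 0), (0.2, 0), (0.35, 0), (0.45, 1),
        (0.55, 0), (0.65, 1), (0.8, 1), (0.9, 1)]

# A "borderline" individual near the decision boundary.
x_applicant = 0.5

# Retrain under many seeds and record each model's decision
# for the same individual.
decisions = []
for seed in range(200):
    w, b = train_logreg_sgd(data, seed)
    p = 1.0 / (1.0 + math.exp(-(w * x_applicant + b)))
    decisions.append(1 if p >= 0.5 else 0)

# The summary is not one outcome but an empirical distribution
# over possible outcomes for this individual.
approval_rate = sum(decisions) / len(decisions)
print(f"Empirical P(approve) across 200 retrainings: {approval_rate:.2f}")
```

Each individual run still produces a concrete, legally cognizable decision; the distribution over runs is the additional object the abstract argues the law should also reason about.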