Algorithms, from simple automation to machine learning, have been introduced into judicial contexts ostensibly to increase the consistency and efficiency of legal decision making. In this paper, we describe four types of inconsistencies introduced by risk prediction algorithms. These inconsistencies threaten to violate the principle of treating similar cases similarly and often arise from the need to operationalize legal concepts and human behavior into specific measures that enable the building and evaluation of predictive algorithms. These inconsistencies, however, are likely to be hidden from their end users: judges, parole officers, lawyers, and other decision makers. We describe the inconsistencies and their sources, and propose possible indicators and solutions. We also consider inconsistencies arising from the use of algorithms in light of current trends toward more autonomous algorithms and less human-understandable behavioral big data. We conclude by discussing judges' and lawyers' duties of technological ("algorithmic") competence and call for greater alignment between the evaluation of predictive algorithms and the corresponding judicial goals.