This two-part paper argues that seemingly "technical" choices made by the developers of machine-learning-based algorithmic tools used to inform decisions by criminal justice authorities can create serious constitutional dangers, increasing both the likelihood of abuse of decision-making power and the scope and magnitude of injustice. Drawing on three algorithmic tools in use, or recently in use, to assess the "risk" posed by individuals in order to inform how they should be treated by criminal justice authorities, we integrate insights from data science and public law scholarship to show how public law principles, and the more specific legal duties rooted in those principles, are routinely overlooked in algorithmic tool-building and implementation. We argue that technical developers must collaborate closely with public law experts to ensure that, if algorithmic decision-support tools are to inform criminal justice decisions, those tools are configured and implemented throughout the tool-building process in a manner that is demonstrably compliant with public law principles and doctrine, including respect for human rights.