Early studies of risk assessment algorithms used in criminal justice revealed widespread racial biases. In response, machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes. Here, I draw on sociotechnical perspectives to delineate the significant gap between fairness in theory and in practice, focusing on criminal justice. I (1) illustrate how social context can undermine analyses that are restricted to an AI system's outputs, and (2) argue that much of the fair ML literature fails to account for epistemological issues with the underlying crime data. Instead of building AI that reifies power imbalances, as risk assessment algorithms do, I ask whether data science can be used to understand the root causes of structural marginalization.
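To make concrete what "equalizing empirical metrics across protected attributes" typically means in the fair ML literature, the following is a minimal illustrative sketch, not taken from this paper, of a group-fairness audit on synthetic data. The binary protected attribute, labels, and predictions are hypothetical placeholders; real audits would use a model's actual risk scores and recorded outcomes, which the paper argues are themselves epistemologically suspect.

```python
# Minimal sketch (hypothetical, synthetic data) of the group-fairness
# metrics that much of the fair ML literature seeks to equalize.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)    # binary protected attribute (placeholder)
y_true = rng.integers(0, 2, size=n)   # recorded outcome (itself only a proxy for crime)
y_pred = rng.integers(0, 2, size=n)   # model's binary "high risk" prediction

def positive_rate(mask):
    """Rate of positive ("high risk") predictions within a subgroup."""
    return y_pred[mask].mean()

def false_positive_rate(mask):
    """P(pred = 1 | true = 0) within a subgroup."""
    negatives = mask & (y_true == 0)
    return y_pred[negatives].mean()

# Demographic parity gap: difference in positive prediction rates across groups.
dp_gap = abs(positive_rate(group == 0) - positive_rate(group == 1))

# False positive rate gap: the disparity highlighted in early audits of
# criminal justice risk assessment algorithms.
fpr_gap = abs(false_positive_rate(group == 0) - false_positive_rate(group == 1))

print(f"Demographic parity gap: {dp_gap:.3f}")
print(f"False positive rate gap: {fpr_gap:.3f}")
```

Such metrics are computed entirely from the system's inputs and outputs, which is precisely why the paper argues they cannot capture the social context or the biases embedded in the crime data itself.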