The growing philosophical literature on algorithmic fairness has examined statistical criteria such as equalized odds and calibration, causal and counterfactual approaches, and the role of structural and compounding injustices. Yet an important dimension has been overlooked: whether the evidential value of an algorithmic output itself depends on structural injustice. We contrast a predictive policing algorithm, which relies on historical crime data, with a camera-based system that records ongoing offenses, both designed to guide police deployment. In evaluating the moral acceptability of acting on a piece of evidence, we must ask not only whether the evidence is probative in the actual world, but also whether it would remain probative in nearby worlds in which the relevant injustices are absent. The predictive policing algorithm fails this test; the camera-based system passes it. Evidence that fails the test is morally more problematic to use punitively than evidence that passes it.