Data-driven predictive algorithms are widely used to automate and guide high-stakes decision-making, such as bail and parole recommendations, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. These metrics are typically motivated by -- or substantively rooted in -- ideals of distributive justice, as formulated by political and legal philosophers. The perspectives of feminist political philosophers on social justice, by contrast, have been largely neglected. Some feminist philosophers have criticized the paradigm of distributive justice and have proposed corrective amendments to surmount its limitations. The present paper brings some key insights of feminist political philosophy to bear on algorithmic fairness. The paper has three goals. First, I show that algorithmic fairness, in its current scope, does not accommodate structural injustices. Second, I defend the relevance of structural injustice -- as pioneered in the contemporary philosophical literature by Iris Marion Young -- to algorithmic fairness. Third, I take some steps toward developing the paradigm of 'responsible algorithmic fairness' to correct for errors in the current scope and implementation of algorithmic fairness.