This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions into machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.