In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most work in this space defines fairness in simple terms: reducing gaps in performance or outcomes between demographic groups while preserving as much of the original system's accuracy as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better-performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML, and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion towards opportunities for substantive equality and away from strict egalitarianism by default. N.B. Shortened abstract; see the paper for the full abstract.
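As a rough illustration of the "levelling up" idea, the sketch below evaluates a hypothetical minimum per-group recall threshold rather than equalising rates across groups: a model passes only if every group clears the floor, so the check cannot be satisfied by degrading the best-off group. This is not the paper's implementation; the function names, synthetic data, and the 0.75 threshold are invented for illustration.

```python
import numpy as np

def group_recalls(y_true, y_pred, groups):
    """Compute recall (true positive rate) separately for each demographic group."""
    recalls = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue  # no positive examples for this group; recall is undefined
        recalls[g] = (y_pred[positives] == 1).mean()
    return recalls

def meets_minimum_rate(y_true, y_pred, groups, min_rate):
    """Return (passes, per-group recalls) under a minimum rate constraint.

    Unlike an equal-rates check, no group is compared against another,
    so "fairness" cannot be achieved by levelling a better-off group down.
    """
    recalls = group_recalls(y_true, y_pred, groups)
    return all(r >= min_rate for r in recalls.values()), recalls

# Illustrative usage on synthetic predictions (hypothetical data).
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

passes, per_group = meets_minimum_rate(y_true, y_pred, groups, min_rate=0.75)
print(per_group, "meets threshold:", passes)
```

In this toy example group "b" reaches a recall of 1.0 while group "a" falls below the floor, so the constraint is not met; the remedy is to improve performance for group "a", not to reduce it for group "b".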