In many machine learning settings there is an inherent tension between fairness and accuracy desiderata. How should one proceed in light of such trade-offs? In this work we introduce and study $\gamma$-disqualification, a new framework for reasoning about fairness-accuracy trade-offs with respect to a benchmark class $H$ in the context of supervised learning. Our requirement stipulates that a classifier should be disqualified if it is possible to improve its fairness by switching to another classifier from $H$ without paying "too much" in accuracy. The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable trade-offs between accuracy and fairness, in a way that is independent of the specific metrics used to quantify fairness and accuracy in a given task. Towards this objective, we establish principled translations between units of accuracy and units of (un)fairness for different accuracy measures. We show that $\gamma$-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy, and we give an efficient reduction from the problem of finding the optimal classifier that satisfies our requirement to the problem of approximating the Pareto frontier of $H$.
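To make the requirement concrete, the following is a minimal sketch of one plausible formalization consistent with the description above; the abstract itself does not fix the exact definition, and $\mathrm{err}(\cdot)$ and $\mathrm{unf}(\cdot)$ are assumed, task-specific measures of error and unfairness:
\[
h \text{ is } \gamma\text{-disqualified w.r.t. } H
\;\iff\;
\exists\, h' \in H:\;
\mathrm{unf}(h') < \mathrm{unf}(h)
\;\text{ and }\;
\mathrm{err}(h') - \mathrm{err}(h) \,\le\, \gamma \cdot \big(\mathrm{unf}(h) - \mathrm{unf}(h')\big).
\]
Under this reading, $\gamma$ acts as an exchange rate between units of unfairness and units of accuracy: a larger $\gamma$ tolerates a larger accuracy sacrifice per unit of fairness gained, and the principled translations mentioned above determine how that exchange rate should be interpreted for different accuracy measures.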