Binary decision-making classifiers are not fair by default. Fairness requirements add an element to the decision-making rationale, which is typically driven by maximizing some utility function. In that sense, algorithmic fairness can be formulated as a constrained optimization problem. This paper contributes to the discussion on how to implement fairness, focusing on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency (which combines the former two). We show that group-specific threshold rules are optimal for PPV parity and FOR parity, similar to well-known results for other group fairness criteria. However, depending on the underlying population distributions and the utility function, we find that sometimes an upper-bound threshold rule for one group is optimal: utility maximization under PPV parity (or FOR parity) might thus lead to selecting the individuals with the smallest utility for one group, instead of selecting the most promising individuals. This result is counter-intuitive and stands in contrast to the analogous solutions for statistical parity and equality of opportunity. We also derive the optimal decision rules satisfying the fairness constraint of sufficiency. We show that more complex decision rules are required and that this leads to within-group unfairness for all but one of the groups. We illustrate our findings on simulated and real data.
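As a toy illustration of group-specific threshold rules and PPV parity, the sketch below simulates two hypothetical groups whose risk scores relate differently to the true outcome, shows that a single shared threshold violates PPV parity, and then searches for a group-specific threshold that restores it. All distributions, the miscalibration model, and the grid search are illustrative assumptions, not the paper's construction; the optional `upper` argument mirrors the upper-bound threshold rules mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppv(scores, labels, threshold, upper=None):
    """Positive predictive value P(Y=1 | selected) under a threshold rule.

    With `upper` set, selection is the score band [threshold, upper],
    mimicking an upper-bound threshold rule.
    """
    selected = scores >= threshold
    if upper is not None:
        selected &= scores <= upper
    if selected.sum() == 0:
        return np.nan
    return labels[selected].mean()

# Hypothetical populations: group A's scores are calibrated,
# group B's true outcome rate exceeds its scores.
n = 100_000
scores_a = rng.beta(2, 2, n)
labels_a = rng.random(n) < scores_a           # calibrated
scores_b = rng.beta(2, 2, n)
labels_b = rng.random(n) < scores_b ** 0.5    # miscalibrated upward

# A single shared threshold generally violates PPV parity ...
print(ppv(scores_a, labels_a, 0.6), ppv(scores_b, labels_b, 0.6))

# ... but a group-specific threshold for group B can (approximately)
# restore it: grid-search the threshold matching group A's PPV.
target = ppv(scores_a, labels_a, 0.6)
grid = np.linspace(0.01, 0.99, 99)
gaps = [abs(ppv(scores_b, labels_b, t) - target) for t in grid]
t_b = grid[np.nanargmin(gaps)]
print(t_b, ppv(scores_b, labels_b, t_b))
```

Note that the matching threshold for group B ends up below 0.6 here: because group B over-performs its scores, a lower cutoff suffices to reach the same PPV, which is exactly the kind of group-dependent behavior the optimal rules in the paper formalize.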