The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This raises the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach given suboptimal decisions.
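The notion of simultaneously outperforming a human reference on multiple measures can be sketched as a simple dominance check. The following is a minimal illustration, not the paper's method: it assumes binary decisions, a single binary protected attribute, and just two measures (accuracy and the demographic-parity gap); all function names are hypothetical.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between the two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def is_superhuman(model_dec, human_dec, labels, group):
    """Illustrative check: the model counts as 'superhuman' here only if it
    matches or beats the human decisions on EVERY measure at once."""
    # Negate the parity gap so that larger is better for every measure.
    model_scores = {
        "accuracy": (model_dec == labels).mean(),
        "parity": -demographic_parity_gap(model_dec, group),
    }
    human_scores = {
        "accuracy": (human_dec == labels).mean(),
        "parity": -demographic_parity_gap(human_dec, group),
    }
    return all(model_scores[k] >= human_scores[k] for k in model_scores)
```

A trade-off-based approach would instead optimize a fixed weighted combination of these measures; the dominance view above avoids committing to any particular weighting.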