As machine learning is increasingly used to make high-stakes decisions, an emerging challenge is to avoid unfair AI systems that lead to discriminatory decisions for protected populations. A direct approach to obtaining a fair predictive model is to optimize the model's prediction performance subject to fairness constraints, which achieves Pareto efficiency in the trade-off between performance and fairness. Among various fairness metrics, those based on the area under the ROC curve (AUC) have recently emerged because they are threshold-agnostic and effective for imbalanced data. In this work, we formulate the training of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints. This problem can be reformulated as a min-max optimization problem with min-max constraints, which we solve by stochastic first-order methods based on a new Bregman divergence designed for the special structure of the problem. We numerically demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
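For intuition, an AUC-based fairness metric can be estimated empirically from pairwise ranking probabilities between demographic groups. The sketch below is illustrative only and not the paper's exact formulation: the function name `pairwise_auc` and the specific cross-group gap shown (comparing how often positives of one group outrank negatives of the other) are assumptions for demonstration.

```python
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """Empirical AUC: probability that a positive example is scored
    above a negative one, counting ties as one half."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Toy model scores, split by label and by a binary protected attribute.
pos_a, neg_a = [0.9, 0.8, 0.7], [0.3, 0.2]   # group a
pos_b, neg_b = [0.6, 0.5], [0.4, 0.1]         # group b

# One AUC-style fairness gap: asymmetry in cross-group ranking accuracy.
gap = abs(pairwise_auc(pos_a, neg_b) - pairwise_auc(pos_b, neg_a))
print(f"cross-group AUC gap: {gap:.3f}")
```

Because such metrics depend only on pairwise score comparisons, they need no classification threshold and remain informative when positive and negative examples are heavily imbalanced.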