As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair. Among various algorithms for fair AI, learning a prediction model by minimizing the empirical risk (e.g., cross-entropy) subject to a given fairness constraint has received much attention. To avoid computational difficulty, however, the given fairness constraint is replaced by a surrogate fairness constraint, just as the 0-1 loss is replaced by a convex surrogate loss in classification problems. In this paper, we investigate the validity of existing surrogate fairness constraints and propose a new surrogate fairness constraint called SLIDE, which is computationally feasible and asymptotically valid in the sense that the learned model satisfies the fairness constraint asymptotically and achieves a fast convergence rate. Numerical experiments confirm that SLIDE works well on various benchmark datasets.
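To illustrate the general approach the abstract describes (not SLIDE itself), the following is a minimal, hypothetical sketch of fairness-constrained empirical risk minimization: logistic regression trained on the cross-entropy risk plus a smooth surrogate penalty for the demographic-parity gap. The data, the penalty form, and all variable names are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Synthetic data: features X, sensitive attribute s, label y correlated with s.
rng = np.random.default_rng(0)
n, d = 400, 3
X = rng.normal(size=(n, d))
s = (rng.random(n) < 0.5).astype(int)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam = 2.0  # weight on the surrogate fairness penalty (illustrative choice)
for _ in range(500):
    p = sigmoid(X @ w)
    grad_risk = X.T @ (p - y) / n                # cross-entropy gradient
    gap = p[s == 1].mean() - p[s == 0].mean()    # smooth surrogate DP gap
    # Gradient of the gap: d/dw sigmoid(x.w) = sigmoid * (1 - sigmoid) * x
    dgap = (X[s == 1].T @ (p[s == 1] * (1 - p[s == 1])) / (s == 1).sum()
            - X[s == 0].T @ (p[s == 0] * (1 - p[s == 0])) / (s == 0).sum())
    # Penalized objective: risk + lam * |gap|; subgradient step on both terms.
    w -= 0.5 * (grad_risk + lam * np.sign(gap) * dgap)

p = sigmoid(X @ w)
print(round(abs(p[s == 1].mean() - p[s == 0].mean()), 3))
```

In this penalized formulation, the hard fairness constraint is relaxed into a differentiable surrogate penalty so that ordinary gradient descent applies; the paper's contribution concerns which surrogate constraints remain valid for the original (non-smooth) fairness criterion.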