We address the problem of algorithmic fairness: ensuring that sensitive variables do not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our approach. We specify our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can be easily added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step. Experiments indicate that the method is empirically effective and performs favorably against state-of-the-art approaches.
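The abstract's remark that, for linear models, the fairness constraint reduces to a simple data preprocessing step can be sketched concretely. The snippet below is an illustrative assumption rather than the paper's exact procedure: the helper name fairness_preprocess, the use of the barycenters of the positively-labeled examples in each sensitive group, and the projection formula are all chosen for exposition. The idea is to project every feature vector onto the orthogonal complement of the direction u joining the two group-conditional means; any regularized linear method whose solution lies in the span of the training data (e.g., an SVM, by the representer theorem) then satisfies ⟨w, u⟩ = 0 by construction.

import numpy as np

def fairness_preprocess(X, y, s):
    """Illustrative sketch (hypothetical helper, not taken from the paper):
    project features onto the orthogonal complement of the direction
    separating the group-conditional means of the positive class.

    X : (n, d) feature matrix
    y : (n,) binary labels in {0, 1}
    s : (n,) binary sensitive attribute in {0, 1}
    """
    # Barycenters of the positively-labeled examples in each group
    # (assumes both groups contain at least one positive example).
    mu_0 = X[(y == 1) & (s == 0)].mean(axis=0)
    mu_1 = X[(y == 1) & (s == 1)].mean(axis=0)
    u = mu_1 - mu_0  # direction the learned model should be orthogonal to

    # Remove each example's component along u; afterwards the data lie in
    # the orthogonal complement of u, so any linear model whose weight
    # vector is a combination of training points satisfies <w, u> = 0.
    X_tilde = X - np.outer(X @ u, u) / (u @ u)
    return X_tilde, u

# Minimal usage check on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
s = rng.integers(0, 2, size=200)
X_tilde, u = fairness_preprocess(X, y, s)
print(np.allclose(X_tilde @ u, 0.0))  # True: projected data orthogonal to u

Under these assumptions the preprocessing is a one-time rank-one projection of the data, after which any off-the-shelf linear learner can be used unchanged; this is what makes the linear case particularly cheap compared to adding an explicit constraint to the optimization problem.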