As bias in widely deployed machine learning systems is taken increasingly seriously, the accuracy loss that typically accompanies fairness improvements remains a persistent concern for researchers. To address this problem, we present a novel analysis of the expected fairness quality under weighted voting, suitable for both binary and multi-class classification. The analysis takes into account the correction of biased predictions by ensemble members and provides learning bounds that are amenable to efficient minimisation. Building on this analysis and the concepts of domination and Pareto optimality, we further propose a pruning method that increases fairness with little or even no decline in accuracy. Experimental results indicate that the proposed learning bounds are faithful and that the proposed pruning method can indeed increase ensemble fairness without much accuracy degradation.
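The domination relation underpinning such Pareto-based pruning can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes each candidate sub-ensemble has already been scored on two objectives to minimise (e.g. a fairness violation and an error rate), and keeps only the non-dominated candidates.

```python
from typing import List, Sequence, Tuple


def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


def pareto_front(points: List[Tuple[float, float]]) -> List[int]:
    """Indices of non-dominated points, i.e. the candidate
    sub-ensembles kept after pruning."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]


# Hypothetical scores per candidate: (fairness_violation, error_rate)
scores = [(0.10, 0.20), (0.20, 0.10), (0.20, 0.20), (0.30, 0.30)]
kept = pareto_front(scores)  # → [0, 1]
```

The last two candidates are dropped because the first two are at least as good on both objectives and strictly better on one; a final sub-ensemble would then be chosen from the remaining front according to the desired fairness–accuracy trade-off.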