As machine learning (ML) systems are adopted in increasingly critical domains, addressing the bias that can arise in these systems has become ever more important. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different notions of fairness, often leading to conflicting strategies and consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all of them into a more robust pre-processing ensemble. We report lessons learned that can help practitioners better select fairness algorithms for their models.
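To make the idea of fairness pre-processing concrete, the sketch below shows one such step applied before model training. It is a minimal illustration only, assuming the AIF360 toolkit and its Reweighing algorithm as a stand-in; the abstract does not name the specific algorithms or library evaluated, and the toy data and column names are hypothetical.

```python
# Minimal fairness pre-processing sketch, assuming AIF360's Reweighing
# (an assumption; the study's actual algorithms are not named here).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical toy data: 'sex' is the protected attribute, 'label' the outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1],
    "score": [0.2, 0.6, 0.4, 0.9, 0.1, 0.7],
    "label": [0, 1, 0, 1, 0, 1],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

# Reweighing assigns instance weights that balance favorable outcomes across
# groups before any model is trained; a downstream classifier can consume
# these weights via its sample_weight argument.
rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                privileged_groups=[{"sex": 1}])
transformed = rw.fit_transform(dataset)
print(transformed.instance_weights)
```

Other pre-processing approaches instead transform feature values or generate a debiased dataset; an ensemble in the spirit described above would combine the outputs of several such transformations before training.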