Machine learning actively impacts our everyday life in almost all domains, such as healthcare, finance, and energy. As our dependence on machine learning increases, it is inevitable that these algorithms will be used to make decisions that directly impact society, spanning all scales from personal choices to worldwide policies. Hence, it is crucial to ensure that (un)intentional bias does not affect machine learning algorithms, especially when they are required to make decisions that may have unintended consequences. Algorithmic fairness techniques have gained traction in the machine learning community, and many methods and metrics have been proposed to ensure and evaluate fairness in algorithms and data collection. In this paper, we study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric. We demonstrate that such a classifier has an increased false positive rate across sensitive groups and propose a conceptually simple method to mitigate this bias. We rigorously analyze the proposed method and evaluate it on several real-world datasets, demonstrating its efficacy.
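To make the metrics named above concrete, the following is a minimal sketch, not the paper's implementation, of how per-group true positive and false positive rates could be computed: Equal Opportunity asks that the true positive rate be equal across sensitive groups, while the abstract's concern is the behavior of the false positive rate across those same groups. The function name `group_rates` and the toy arrays are illustrative assumptions.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group true positive rate (TPR) and false positive rate (FPR).

    Equal Opportunity requires equal TPR across groups; comparing the
    per-group FPR values reveals the disparity discussed in the abstract.
    """
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        # TPR: fraction of positives predicted positive; FPR: fraction of negatives predicted positive.
        tpr = np.mean(yp[yt == 1]) if np.any(yt == 1) else np.nan
        fpr = np.mean(yp[yt == 0]) if np.any(yt == 0) else np.nan
        rates[g] = {"TPR": tpr, "FPR": fpr}
    return rates

# Toy usage with binary labels, predictions, and a binary sensitive attribute (hypothetical data).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_rates(y_true, y_pred, group))
```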