Machine learning models have been deployed across almost every aspect of society, often in situations that affect the social welfare of many individuals. Although these models offer streamlined solutions to large-scale problems, they may contain biases and treat groups or individuals unfairly. To our knowledge, this review is one of the first to focus specifically on gender bias in applications of machine learning. We first introduce several examples of machine learning gender bias in practice. We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer. Specifically, we discuss the most influential bias mitigation algorithms as applied to domains in which models have a high propensity for gender discrimination. We group these algorithms into two overarching approaches -- removing bias from the data directly and removing bias from the model through training -- and we present representative examples of each. As society increasingly relies on artificial intelligence to aid decision-making, addressing the gender biases present in these models is imperative. To equip readers to assess the fairness of machine learning models and to mitigate the biases present in them, we discuss multiple open source packages for fairness in AI.
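As a concrete illustration of what such formalizations look like, consider a binary predictor $\hat{Y}$, a protected attribute $A$ (e.g., gender), and a true label $Y$. Two of the most widely used criteria in the fairness literature, stated here for illustration rather than quoted from the review itself, are

$$\text{Demographic parity:}\quad P(\hat{Y}=1 \mid A=0) = P(\hat{Y}=1 \mid A=1),$$

$$\text{Equalized odds:}\quad P(\hat{Y}=1 \mid A=0,\, Y=y) = P(\hat{Y}=1 \mid A=1,\, Y=y) \quad \text{for } y \in \{0,1\}.$$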
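To make the two overarching approaches concrete, below is a minimal, self-contained Python sketch on synthetic data. The dataset, the model choice, and the use of the open source Fairlearn package are illustrative assumptions on our part, not methods prescribed by the review:

    # Minimal sketch of both mitigation approaches on synthetic data.
    # Assumes numpy, scikit-learn, and the open source Fairlearn package;
    # the data and model choices are illustrative, not from the review.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.metrics import demographic_parity_difference
    from fairlearn.reductions import DemographicParity, ExponentiatedGradient

    rng = np.random.default_rng(0)
    n = 2000
    sex = rng.integers(0, 2, size=n)                # hypothetical protected attribute
    X = np.column_stack([rng.normal(size=n), sex])  # features correlated with sex
    y = (X[:, 0] + 0.8 * sex + rng.normal(size=n) > 0).astype(int)

    def dp_gap(model):
        # Demographic parity difference of a fitted model's predictions.
        return demographic_parity_difference(y, model.predict(X),
                                             sensitive_features=sex)

    baseline = LogisticRegression().fit(X, y)       # unconstrained baseline
    print(f"baseline DP gap:    {dp_gap(baseline):.3f}")

    # Approach 1: remove bias from the data directly. Reweighing
    # (Kamiran & Calders) weights each (group, label) cell so the protected
    # attribute and the label look statistically independent.
    w = np.empty(n)
    for a in (0, 1):
        for lbl in (0, 1):
            cell = (sex == a) & (y == lbl)          # assumed non-empty here
            w[cell] = (sex == a).mean() * (y == lbl).mean() / cell.mean()
    reweighed = LogisticRegression().fit(X, y, sample_weight=w)
    print(f"reweighed DP gap:   {dp_gap(reweighed):.3f}")

    # Approach 2: remove bias from the model through training. Fairlearn's
    # exponentiated-gradient reduction constrains the learner toward
    # demographic parity while it fits.
    mitigator = ExponentiatedGradient(LogisticRegression(),
                                      constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=sex)
    print(f"in-training DP gap: {dp_gap(mitigator):.3f}")

Fairlearn is used above purely because it is one widely available open source fairness package of the general kind the review discusses; we do not claim it is among the specific packages the review covers.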