With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey, we investigated different real-world applications that have exhibited bias in various ways, and we listed the different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.