One of the challenges in artificial intelligence is ensuring that model decisions are fair and free of bias. Researchers apply datasets, metrics, techniques, and tools to detect and mitigate algorithmic bias and unfairness. This study examines the existing knowledge on bias and unfairness in machine learning models, identifying mitigation methods, fairness metrics, and supporting tools. A systematic literature review identified 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases. The results reveal numerous approaches to detecting and mitigating bias and unfairness in ML technologies, together with a wide variety of clearly defined fairness metrics in the literature. We recommend further research to define which techniques and metrics should be employed in each case, so as to standardize practice, allow the most appropriate metric to be chosen for detecting bias and unfairness in a given context, and thereby help ensure the impartiality of machine learning models.
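To make concrete the kind of fairness metric surveyed in this review, the sketch below computes statistical parity difference, one of the most widely used group fairness metrics: the gap in positive-prediction rates between an unprivileged and a privileged group. This is a minimal illustration; the function name and toy data are ours and are not drawn from any specific reviewed article.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between two groups.

    Returns P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged);
    a value of 0 indicates demographic parity between the groups.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_priv = y_pred[sensitive == 1].mean()    # privileged group (A=1)
    rate_unpriv = y_pred[sensitive == 0].mean()  # unprivileged group (A=0)
    return rate_unpriv - rate_priv

# Toy example: binary predictions for 8 individuals, 4 per group
y_hat = [1, 1, 0, 1, 0, 0, 1, 0]
group = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_hat, group))  # 0.25 - 0.75 = -0.5
```

A negative value here means the unprivileged group receives favorable predictions less often than the privileged group; as the review's conclusion notes, which such metric is appropriate depends on the context in which the model is deployed.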