Algorithmic bias mitigation has long been one of the most difficult challenges facing the data science and Machine Learning (ML) communities. In recent years, substantial effort has gone into fairness in ML. Despite progress in identifying biases and designing fair algorithms, translating these advances into industry practice remains a major challenge. In this paper, we present the initial results of an industrial open innovation project in the banking sector: we propose a general roadmap for fairness in ML and implement a toolkit, called BeFair, that helps identify and mitigate bias. Results show that training a model without explicit fairness constraints may exacerbate bias in its predictions.
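To make the closing claim concrete, the following is a minimal, self-contained sketch (not the paper's BeFair toolkit, and using entirely synthetic numbers) of how an unconstrained decision threshold can produce a large demographic parity difference between two groups, and how a simple post-processing mitigation with group-specific thresholds, a standard idea in the fairness literature, can reduce it:

```python
def selection_rate(scores, threshold):
    """Fraction of applicants whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical credit scores for two demographic groups (illustrative only).
group_a = [0.62, 0.71, 0.55, 0.80, 0.67]
group_b = [0.48, 0.52, 0.44, 0.58, 0.50]

# Unconstrained model: one global threshold chosen with no fairness constraint.
global_t = 0.55
rate_a = selection_rate(group_a, global_t)  # 5/5 selected
rate_b = selection_rate(group_b, global_t)  # 1/5 selected
dpd_unconstrained = abs(rate_a - rate_b)    # demographic parity difference

# Post-processing mitigation: per-group thresholds chosen so that the
# selection rates are (nearly) equal across groups.
t_a, t_b = 0.62, 0.48
dpd_mitigated = abs(selection_rate(group_a, t_a)
                    - selection_rate(group_b, t_b))

print(f"unconstrained disparity: {dpd_unconstrained:.2f}")  # 0.80
print(f"mitigated disparity:     {dpd_mitigated:.2f}")      # 0.00
```

The numbers and group labels are invented for illustration; in practice the disparity metric, the protected attribute, and the mitigation strategy (pre-, in-, or post-processing) depend on the application and the legal context.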