We dissect an experimental credit scoring model developed with real data and demonstrate, without access to protected attributes, how the use of location information introduces racial bias. We analyze the gradient tree boosting model with the aid of a game-theory-inspired machine learning explainability technique, counterfactual experiments, and Brazilian census data. By exposing algorithmic racial bias through an explanation of the trained machine learning model's inner mechanisms, this experiment constitutes an interesting artifact to aid the endeavor of theoretically understanding the emergence of racial bias in machine learning systems. Without access to individuals' racial categories, we show how classification parity measures computed over geographically defined groups can carry information about the model's racial bias. The experiment testifies to the need for methods and language that do not presuppose access to protected attributes when auditing ML models, to the importance of considering regional specificities when addressing racial issues, and to the central role of census data in the AI research community. To the best of our knowledge, this is the first documented case of algorithmic racial bias in ML-based credit scoring in Brazil, the country with the second largest Black population in the world.
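To make the abstract's methodology concrete, here is a minimal sketch, not the authors' code, of the kind of analysis described: explaining a gradient tree boosting credit model with SHAP values (a game-theory-inspired explainability technique), a counterfactual-style probe on the location feature, and a classification parity check over geographically defined groups. All data, feature names, thresholds, and the group split are hypothetical placeholders.

```python
# Hedged sketch of the abstract's pipeline: gradient tree boosting + SHAP +
# counterfactual location swap + approval-rate parity across geographic groups.
# Every dataset, feature name, and threshold below is a made-up placeholder.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
n = 5000

# Hypothetical tabular credit data with a location feature (e.g. an encoded
# neighbourhood or census-tract identifier) as the proxy variable under scrutiny.
X = pd.DataFrame({
    "income": rng.lognormal(8, 1, n),
    "debt_ratio": rng.uniform(0, 1, n),
    "location_code": rng.integers(0, 50, n),
})
y = (rng.uniform(size=n) < 0.7).astype(int)  # placeholder repayment labels

model = lgb.LGBMClassifier(n_estimators=200)
model.fit(X, y)

# Game-theoretic attribution: SHAP values quantify how much each feature,
# including location, pushes individual scores up or down.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv   # handle SHAP version differences
loc_attr = np.abs(sv[:, X.columns.get_loc("location_code")]).mean()
print("mean |SHAP| of location feature:", loc_attr)

# Counterfactual-style probe: re-score the same individuals after swapping the
# location feature, keeping everything else fixed.
X_cf = X.copy()
X_cf["location_code"] = rng.permutation(X["location_code"].values)
delta = model.predict_proba(X_cf)[:, 1] - model.predict_proba(X)[:, 1]
print("mean score shift under location swap:", delta.mean())

# Classification parity over geographically defined groups (here an arbitrary
# split; the paper links regions to census racial composition instead).
scores = model.predict_proba(X)[:, 1]
approved = scores >= 0.5
group = (X["location_code"] < 25).values
print("approval rate, group A:", approved[group].mean())
print("approval rate, group B:", approved[~group].mean())
```

The design choice illustrated here is that group disparities can be measured without individual-level racial labels: the grouping variable is geographic, and only an external source such as census data ties those regions to racial composition.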