Algorithmic fairness in lending today relies on group fairness metrics to monitor statistical parity across protected groups. This approach is vulnerable to subgroup discrimination by proxy, carrying significant legal and reputational risks for lenders and blatantly unfair outcomes for borrowers. Practical challenges arise from the many possible combinations and subsets of protected groups. We motivate this problem against the backdrop of historical and residual racism in the United States, which pollutes all available training data and heightens public sensitivity to algorithmic bias. We review current regulatory compliance protocols for fairness in lending and discuss their limitations relative to what state-of-the-art fairness methods may contribute. We propose a solution for addressing subgroup discrimination, while adhering to existing group fairness requirements, that draws on recent developments in individual fairness methods and corresponding fair metric learning algorithms.