Algorithmic lending has transformed the consumer credit landscape, with complex machine learning models now commonly used to make or assist underwriting decisions. To comply with fair lending laws, these algorithms typically exclude legally protected characteristics, such as race and gender. Yet algorithmic underwriting can still inadvertently favor certain groups, prompting new questions about how to audit lending algorithms for potentially discriminatory behavior. Building on prior theoretical work, we introduce a profit-based measure of lending discrimination in loan pricing. Applying our approach to approximately 80,000 personal loans from a major U.S. fintech platform, we find that loans made to men and Black borrowers yielded lower profits than loans to other groups, indicating that men and Black applicants benefited from relatively favorable lending decisions. We trace these disparities to miscalibration in the platform's underwriting model, which underestimates credit risk for Black borrowers and overestimates risk for women. We show that one could correct this miscalibration -- and the corresponding lending disparities -- by explicitly including race and gender in underwriting models, illustrating a tension between competing notions of fairness.
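The miscalibration audit sketched above can be illustrated with a minimal example: comparing a model's mean predicted default probability against the realized default rate within each group. This is a hypothetical sketch with invented data, not the paper's actual methodology; the function name and the toy groups are our own.

```python
# Hypothetical illustration of a group-level calibration check.
# A model is calibrated for a group if its average predicted default
# probability matches that group's realized default rate.
# All names and data below are invented for illustration.

def group_calibration_gap(records):
    """Return mean(predicted risk) - mean(realized default) per group.

    records: iterable of (group, predicted_default_prob, defaulted) tuples.
    A negative gap means the model underestimates that group's credit risk;
    a positive gap means it overestimates it.
    """
    sums = {}
    for group, pred, defaulted in records:
        s = sums.setdefault(group, [0.0, 0, 0])  # [sum preds, defaults, count]
        s[0] += pred
        s[1] += int(defaulted)
        s[2] += 1
    return {g: (p / n) - (d / n) for g, (p, d, n) in sums.items()}

# Toy data: the model predicts a 10% default probability for everyone,
# but group "A" actually defaults 20% of the time and group "B" only 5%.
records = (
    [("A", 0.10, True)] * 20 + [("A", 0.10, False)] * 80 +
    [("B", 0.10, True)] * 5 + [("B", 0.10, False)] * 95
)
gaps = group_calibration_gap(records)
print(gaps)  # gap for A is negative (risk underestimated), positive for B
```

In this toy setting, pricing both groups off the same predicted risk would make loans to group "A" less profitable than expected, which is the profit-based signal of discrimination the abstract describes.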