Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities between demographic groups. Today, machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," raising the concern that banks and other financial institutions could, potentially unwittingly, engage in illegal discrimination through the use of this technology. In the United States, laws are in place to prevent discrimination in lending, and agencies are charged with enforcing them. However, conversations about fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks the legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should incorporate practices and techniques from the research community. This paper aims to better align these sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.