With the ever-growing achievements in Artificial Intelligence (AI) and the recent surge of enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the `right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has recently been introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (i.e. global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through functionally-grounded, application-grounded and human-grounded analyses shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness.
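As a minimal sketch of the modelling pipeline the abstract describes (a gradient-boosted classifier on imbalanced loan data, plus a global feature-level explanation), the snippet below uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost and a synthetic imbalanced dataset in place of HELOC/LC; all names and parameters here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced credit dataset: ~10% defaults.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Gradient boosting classifier (stand-in for XGBoost).
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Probability of default for each held-out applicant.
p_default = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, p_default)

# A simple global explanation: impurity-based feature importances,
# normalised to sum to 1 across features.
importances = clf.feature_importances_
print(f"AUC: {auc:.3f}")
print("Global feature importances:", np.round(importances, 3))
```

Local feature-based and instance-based explanations of the kind the framework provides would typically be obtained with additional tooling (e.g. per-prediction attribution methods), which this sketch omits.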