Explainable artificial intelligence (xAI) is seen as a solution to making AI systems less of a black box. It is considered essential to ensuring transparency, fairness, and accountability, which are especially paramount in the financial sector. The aim of this study was a preliminary investigation of the perspectives of supervisory authorities and regulated entities regarding the application of xAI in the financial sector. Three use cases (consumer credit, credit risk, and anti-money laundering) were examined using semi-structured interviews at three banks and two supervisory authorities in the Netherlands. We found that, for the investigated use cases, a disparity exists between supervisory authorities and banks regarding the desired scope of explainability of AI systems. We argue that the financial sector could benefit from a clear differentiation between technical AI (model) explainability requirements and explainability requirements of the broader AI system in relation to applicable laws and regulations.