Banks have a societal responsibility and face regulatory requirements to mitigate the risk of financial crimes. Risk mitigation primarily happens by monitoring customer activity through Transaction Monitoring (TM). Recently, Machine Learning (ML) has been proposed to identify suspicious customer behavior, which raises complex socio-technical implications around trust in and explainability of ML models and their outputs. However, little research is available due to the sensitivity of the domain. We aim to fill this gap by presenting empirical research exploring how ML-supported automation and augmentation affect the TM process and stakeholders' requirements for building eXplainable Artificial Intelligence (xAI). Our study finds that xAI requirements depend on the liable party in the TM process, which changes with the degree of augmentation or automation of TM. Context-relatable explanations can provide much-needed support for auditing and may diminish bias in the investigator's judgement. These results suggest a use-case-specific approach for xAI to adequately foster the adoption of ML in TM.