Recently, to achieve better acceptance across various domains, researchers have argued that machine intelligence algorithms must be able to provide explanations that humans can understand causally. This property, known as causability, corresponds to a specific level of human-understandable explainability. A class of methods known as counterfactuals may be able to provide causability. Causality has been studied and applied in statistics for many years, but it has received far less attention in artificial intelligence (AI). In a first-of-its-kind study, we employ the principles of causal inference to provide explainability for analytical customer relationship management (ACRM) problems. In the context of banking and insurance, current research on interpretability seeks to answer causality-related questions such as: why did the model make this decision, and was the model's choice influenced by a particular feature? We propose a solution in the form of an intervention, in which the effect of changing the distribution of features in ACRM datasets on the target feature is studied. Subsequently, a set of counterfactuals is obtained that can be furnished to any customer who demands an explanation of a decision taken by the bank or insurance company. Except for the credit card churn prediction dataset, good-quality counterfactuals, requiring changes in no more than three features, were generated for the loan default, insurance fraud detection, and credit card fraud detection datasets.
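To make the counterfactual idea concrete, the following is a minimal, hedged sketch of the kind of search the abstract describes: given a trained classifier and a customer predicted to default, perturb as few features as possible (here capped at three) until the decision flips. The data, feature names, and greedy search strategy below are illustrative assumptions, not the paper's actual datasets or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "loan default"-style data with 4 made-up features
# (e.g., income, debt ratio, history length, late payments).
n = 500
X = rng.normal(size=(n, 4))
true_w = np.array([-1.5, 2.0, -1.0, 1.8])
y = (X @ true_w + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a simple logistic-regression model by gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return sigmoid(x @ w + b) > 0.5  # True = "default"

def counterfactual(x, max_features=3, step=0.5, max_steps=60):
    """Greedily nudge one feature at a time, touching at most
    `max_features` distinct features, until the decision flips."""
    x_cf = x.copy()
    changed = set()
    target = 0.0 if predict(x) else 1.0  # opposite class probability
    for _ in range(max_steps):
        if predict(x_cf) != predict(x):
            break
        best = None
        for j in range(len(x)):
            # Once the budget is used, only revisit already-changed features.
            if j not in changed and len(changed) >= max_features:
                continue
            for d in (-step, step):
                cand = x_cf.copy()
                cand[j] += d
                gain = -(sigmoid(cand @ w + b) - target) ** 2
                if best is None or gain > best[0]:
                    best = (gain, j, d)
        _, j, d = best
        x_cf[j] += d
        changed.add(j)
    return x_cf, changed

# Pick a customer the model flags as a defaulter and explain the decision.
idx = next(i for i in range(n) if predict(X[i]))
x0 = X[idx]
x_cf, changed = counterfactual(x0)
```

Here `changed` lists which features had to move, so the explanation handed to the customer is simply the difference `x_cf - x0` on those features. A real system would additionally constrain perturbations to plausible, actionable values (e.g., age cannot decrease).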