We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs), which replace the ubiquitous logistic link function in Generalised Additive Models (GAMs); and SubscaleHedge, an expert-advice algorithm for combining base models trained on subsets of features called subscales. LAMs can augment any additive binary classification model equipped with a sigmoid link function. Moreover, they afford direct global and local attributions of additive components to the model output in probability space. We argue that LAMs and SubscaleHedge improve the interpretability of their base algorithms. Using rigorous null-hypothesis significance testing on a broad suite of financial modelling data, we show that our algorithms do not suffer from large performance penalties in terms of ROC-AUC and calibration.
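To illustrate the idea of swapping the logistic link for a linear one, the sketch below contrasts the standard sigmoid with a hypothetical linearised link (a first-order Taylor expansion of the sigmoid around zero, clipped to [0, 1]). This is an illustrative stand-in under our own assumptions, not the paper's exact LAM construction; the function names are invented for this example.

```python
import numpy as np

def logistic_link(z):
    # Standard sigmoid link used by GAMs for binary classification.
    return 1.0 / (1.0 + np.exp(-z))

def linearised_link(z):
    # Hypothetical linearised link: sigma(z) ~ 1/2 + z/4 near z = 0,
    # clipped to the valid probability range [0, 1]. Assumed for
    # illustration only; not the paper's exact definition.
    return np.clip(0.5 + z / 4.0, 0.0, 1.0)

scores = np.array([-6.0, -1.0, 0.0, 1.0, 6.0])
print(logistic_link(scores))
print(linearised_link(scores))
```

Under a linear link, each additive component's contribution to the output probability is just its (scaled) value, which is what makes direct global and local attributions in probability space possible; the sigmoid, by contrast, mixes components nonlinearly.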