Using a local surrogate approach, we define an analytical method for explaining AI predictions in the framework of regression models. When the AI model produces additive corrections to the predictions of a base model, the explanations are delivered as a shift of the base model's interpretable parameters, provided the AI corrections are small in a rigorously defined sense. We formulate criteria that give a precise relation between loss of accuracy and lack of model fidelity. Two applications show how physical or econometric parameters can be used to interpret the action of neural network and random forest models in terms of the underlying base model. This is an extended version of our paper presented at the ISM 2020 conference, where we first introduced our new approach BAPC.
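The core idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the data, the linear base model, and the stand-in "AI" correction are all hypothetical. An interpretable base model is fitted first; the correction model accounts for what the base model misses; the explanation is then obtained by refitting the base model on corrected predictions in a small neighborhood, so the AI correction appears as a shift of the interpretable parameters.

```python
# Minimal sketch of the additive-correction explanation idea.
# All data and models here are illustrative assumptions.

def fit_line(xs, ys):
    """Ordinary least squares for the base model y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic data with a mild nonlinearity the linear base model misses.
xs = [i / 10 for i in range(-10, 11)]
ys = [1.0 + 2.0 * x + 0.3 * x * x for x in xs]

# Step 1: fit the interpretable base model.
a0, b0 = fit_line(xs, ys)

# Step 2: an additive "AI" correction on the base model's residuals
# (a stand-in for a neural network or random forest).
def ai_correction(x):
    return (1.0 + 2.0 * x + 0.3 * x * x) - (a0 + b0 * x)

# Step 3: local surrogate around a point of interest x0 -- refit the
# base model on corrected predictions in a small neighborhood.  The
# parameter shift (delta_a, delta_b) expresses the AI correction in
# the interpretable terms of the base model.
x0 = 0.5
local_xs = [x0 - 0.1, x0, x0 + 0.1]
local_ys = [a0 + b0 * x + ai_correction(x) for x in local_xs]
a1, b1 = fit_line(local_xs, local_ys)
delta_a, delta_b = a1 - a0, b1 - b0
print(round(delta_a, 3), round(delta_b, 3))
```

Because the correction here is quadratic, the local slope shift `delta_b` recovers the local derivative of the missing term, which is the sense in which the shifted parameters remain interpretable only while the correction stays small.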