In many interdisciplinary fields, ML interpretations need to be consistent with what-if scenarios related to the current case: if one factor changes, how does the model react? Although attribution methods are supported by elegant axiomatic systems, they mainly focus on individual inputs and are generally inconsistent across such scenarios. To support what-if scenarios, we introduce a new notion called truthful interpretation and apply Fourier analysis of Boolean functions to obtain rigorous guarantees. Experimental results show that, for neighborhoods with various radii, our method achieves 2x to 50x lower interpretation error compared with other methods.
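As a rough illustration of the underlying machinery (not the paper's implementation), the sketch below expands a Boolean function f: {-1, 1}^n -> R into its Fourier basis, truncates it to low degree as a local interpretation, and measures the worst-case what-if error over a Hamming-ball neighborhood of a given radius. All function names and the degree-1 truncation choice are hypothetical, introduced here only for exposition.

```python
# Minimal sketch, assuming f maps the Boolean cube {-1, 1}^n to the reals.
# Names (fourier_coefficients, truncated, neighborhood_error) are illustrative.
from itertools import combinations, product
import numpy as np

def fourier_coefficients(f, n):
    """hat{f}(S) = E_x[f(x) * prod_{i in S} x_i] over the uniform cube."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            chi = lambda x: np.prod([x[i] for i in S]) if S else 1.0
            coeffs[S] = sum(f(x) * chi(x) for x in points) / len(points)
    return coeffs

def truncated(coeffs, degree):
    """Keep Fourier terms of degree <= `degree` as the interpretation."""
    kept = {S: c for S, c in coeffs.items() if len(S) <= degree}
    def g(x):
        return sum(c * np.prod([x[i] for i in S]) for S, c in kept.items())
    return g

def neighborhood_error(f, g, x0, radius):
    """Max |f - g| over the Hamming ball of the given radius around x0,
    i.e., the worst what-if inconsistency of the interpretation g."""
    n = len(x0)
    worst = 0.0
    for k in range(radius + 1):
        for flips in combinations(range(n), k):
            x = list(x0)
            for i in flips:
                x[i] = -x[i]
            worst = max(worst, abs(f(x) - g(x)))
    return worst

# Example: majority on 5 bits, interpreted by a degree-1 truncation.
f = lambda x: 1.0 if sum(x) > 0 else -1.0
g = truncated(fourier_coefficients(f, 5), degree=1)
x0 = (1, 1, -1, 1, -1)
print(neighborhood_error(f, g, x0, radius=2))
```

A degree-1 truncation plays the role of a feature-attribution-style interpretation; the neighborhood error above is one concrete way to quantify the inconsistency with what-if scenarios that the abstract refers to.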