Amid a discussion about Green AI in which explainability is often neglected, we explore the possibility of efficiently approximating computationally expensive explainers. To this end, we propose the task of feature attribution modelling, which we address with Empirical Explainers. Empirical Explainers learn from data to predict the attribution maps of expensive explainers. We train and test Empirical Explainers in the language domain and find that they model their expensive counterparts well, at a fraction of the cost. They could thus significantly mitigate the computational burden of neural explanations in applications that tolerate an approximation error.
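The core idea — collecting (input, attribution) pairs from an expensive explainer and fitting a cheap surrogate to regress them — can be sketched minimally. This is an illustrative toy, not the paper's implementation: the target model is linear, the "expensive" explainer is gradient×input (exact for a linear model), and the names `expensive_explainer` and `empirical_explainer` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target model f(x) = w . x; its gradient-times-input attribution is w * x.
w = rng.normal(size=5)

def expensive_explainer(x):
    # Stand-in for a costly explainer (e.g. one requiring many forward passes).
    # For this linear toy model, gradient-times-input is simply w * x.
    return w * x

# Collect training pairs: inputs and their attribution maps.
X = rng.normal(size=(200, 5))
A = np.stack([expensive_explainer(x) for x in X])

# The Empirical Explainer: here, one least-squares coefficient per feature,
# fit to reproduce the expensive explainer's attributions from the input alone.
coef = (X * A).sum(axis=0) / (X * X).sum(axis=0)

def empirical_explainer(x):
    # A single cheap multiply replaces the expensive attribution computation.
    return coef * x

# Approximation error on fresh inputs (near zero in this linear toy setting;
# real Empirical Explainers incur a nonzero but tolerable error).
X_test = rng.normal(size=(50, 5))
err = max(
    np.abs(empirical_explainer(x) - expensive_explainer(x)).max() for x in X_test
)
print(err < 1e-8)
```

In practice the surrogate would be a neural network and the expensive explainer something like Integrated Gradients, but the training loop has the same shape: supervise the cheap model on the expensive explainer's outputs.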