Explainable Artificial Intelligence (XAI) focuses mainly on batch learning scenarios. In such static learning tasks, various XAI methods, like SAGE, have been proposed that distribute the importance of a model over its input features. However, models are often applied in ever-changing dynamic environments, such as incremental learning. We therefore propose iSAGE as a direct incrementalization of SAGE suited for dynamic learning environments. We further provide an efficient approximation method to model feature removal based on the conditional data distribution in an incremental setting. We formally analyze our explanation method, showing that it is an unbiased estimator, and construct confidence bounds for the point estimates. Lastly, we evaluate our approach in a thorough experimental analysis based on well-established data sets and concept drift streams.
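To make the high-level description above more concrete, the following minimal sketch illustrates how a permutation-sampling, SAGE-style importance estimate could be maintained incrementally with exponential smoothing over an arriving data stream. It is an illustration under stated assumptions, not the paper's implementation: the class name `IncrementalSAGESketch`, the `loss_fn`/`model` interfaces, the smoothing rate `alpha`, and the reservoir-based marginal imputation (which stands in for the paper's conditional-distribution feature removal) are all hypothetical.

```python
# Minimal sketch of an incremental, permutation-based SAGE-style estimator.
# Assumptions (not from the paper): feature removal is approximated by
# imputation from a sliding reservoir of past observations, and the stream is
# a plain iterable of (x, y) pairs.
import random
import numpy as np


class IncrementalSAGESketch:
    def __init__(self, model, n_features, loss_fn, alpha=0.01, reservoir_size=200):
        self.model = model                # incrementally trained model
        self.loss_fn = loss_fn            # e.g. per-example cross-entropy: loss_fn(model, x, y)
        self.alpha = alpha                # smoothing rate for the running estimates
        self.phi = np.zeros(n_features)   # running SAGE-style importance estimates
        self.reservoir = []               # past feature vectors used for imputation
        self.reservoir_size = reservoir_size
        self.n_features = n_features

    def _impute(self, x, hidden):
        # Replace "removed" features with values from a past observation
        # (a marginal approximation; the paper targets the conditional distribution).
        x_imp = np.array(x, dtype=float)
        if self.reservoir:
            donor = random.choice(self.reservoir)
            for j in hidden:
                x_imp[j] = donor[j]
        return x_imp

    def explain_one(self, x, y):
        # One permutation-sampling step: reveal features one by one and credit
        # each feature with the resulting reduction in loss.
        order = list(range(self.n_features))
        random.shuffle(order)
        hidden = set(order)
        prev_loss = self.loss_fn(self.model, self._impute(x, hidden), y)
        for j in order:
            hidden.remove(j)
            loss = self.loss_fn(self.model, self._impute(x, hidden), y)
            delta = prev_loss - loss  # marginal contribution of revealing feature j
            self.phi[j] = (1 - self.alpha) * self.phi[j] + self.alpha * delta
            prev_loss = loss
        # Keep a bounded random sample of past observations for imputation.
        if len(self.reservoir) < self.reservoir_size:
            self.reservoir.append(np.array(x, dtype=float))
        else:
            self.reservoir[random.randrange(self.reservoir_size)] = np.array(x, dtype=float)
        return self.phi
```

In such a setup, `explain_one` would be called on each arriving (x, y) pair alongside the model's incremental update, so the smoothed importance estimates track the current model and can adapt under concept drift.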