Explainable Artificial Intelligence (XAI) has so far focused mainly on static learning scenarios. We are interested in dynamic scenarios where data arrives progressively and learning is done incrementally rather than in batch mode. We seek efficient incremental algorithms for computing feature importance (FI), in particular an incremental FI measure based on marginalizing absent features, similar to permutation feature importance (PFI). We propose an efficient, model-agnostic algorithm called iPFI that estimates this measure incrementally under dynamic modeling conditions, including concept drift. We prove theoretical guarantees on the approximation quality in terms of expectation and variance. To validate our theoretical findings and the efficacy of our approach compared to traditional batch PFI, we conduct multiple experimental studies on benchmark data with and without concept drift.
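To make the idea concrete, the following is a minimal, hypothetical sketch of incremental PFI estimation; it is not the paper's iPFI algorithm. It assumes feature marginalization by sampling replacement values from a sliding reservoir of past observations, and exponential smoothing so the importance estimate can track concept drift. All names (`IncrementalPFI`, `alpha`, `reservoir_size`) are illustrative.

```python
import random


class IncrementalPFI:
    """Hypothetical sketch of incremental permutation feature importance.

    For each incoming sample, feature j's importance is updated toward the
    loss increase observed when feature j is replaced by a value drawn from
    a reservoir of past observations (marginalizing the feature).
    """

    def __init__(self, model, loss, n_features, alpha=0.01, reservoir_size=100):
        self.model = model            # callable: x -> prediction
        self.loss = loss              # callable: (prediction, y) -> float
        self.alpha = alpha            # smoothing rate; larger adapts faster to drift
        self.size = reservoir_size
        self.reservoir = [[] for _ in range(n_features)]
        self.pfi = [0.0] * n_features

    def update(self, x, y):
        base = self.loss(self.model(x), y)
        for j in range(len(x)):
            if self.reservoir[j]:
                x_perm = list(x)
                # marginalize feature j with a value sampled from past data
                x_perm[j] = random.choice(self.reservoir[j])
                diff = self.loss(self.model(x_perm), y) - base
                # exponential smoothing lets the estimate follow concept drift
                self.pfi[j] = (1 - self.alpha) * self.pfi[j] + self.alpha * diff
        for j, v in enumerate(x):
            # maintain a bounded reservoir of recent feature values
            self.reservoir[j].append(v)
            if len(self.reservoir[j]) > self.size:
                self.reservoir[j].pop(0)
        return self.pfi
```

With a toy model that depends only on its first feature, the smoothed estimate assigns positive importance to that feature and near-zero importance to an irrelevant one, mirroring what batch PFI would report.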