In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data. However, considerably less attention has been paid to explaining the predictions of opaque AI systems handling time series data. In this paper, we advance a novel model-agnostic, case-based technique -- Native Guide -- that generates counterfactual explanations for time series classifiers. Given a query time series, $T_{q}$, for which a black-box classification system predicts class $c$, a counterfactual time series explanation shows how $T_{q}$ could change so that the system predicts an alternative class, $c'$. The proposed instance-based technique adapts existing counterfactual instances in the case-base by highlighting and modifying the discriminative areas of the time series that underlie the classification. Quantitative and qualitative results from two comparative experiments indicate that Native Guide generates plausible, proximal, sparse, and diverse explanations that are better than those produced by key benchmark counterfactual methods.
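To make the retrieve-and-adapt idea concrete, the following is a minimal Python sketch of a Native Guide-style procedure, not the paper's implementation. The `window` parameter, the `classifier.predict` interface, and the saliency proxy (the region where the query and its nearest unlike neighbour differ most) are all illustrative assumptions; the paper instead derives discriminative regions from the classifier itself.

```python
import numpy as np

def native_guide_sketch(query, query_class, case_base, case_labels,
                        classifier, window=10):
    """Illustrative sketch of a Native Guide-style counterfactual.

    query      : 1-D numpy array, the query time series T_q
    case_base  : 2-D numpy array of training time series (one per row)
    classifier : black box exposing .predict(batch) -> class labels (assumed)
    window     : length of the region to transplant (assumed parameter)
    """
    # 1. Retrieve the nearest unlike neighbour (NUN): the closest
    #    case-base instance whose label differs from the query's class c.
    unlike = case_labels != query_class
    dists = np.linalg.norm(case_base[unlike] - query, axis=1)
    nun = case_base[unlike][np.argmin(dists)]

    # 2. Locate a discriminative region. As a stand-in for a
    #    classifier-derived importance signal, score each window by how
    #    much the query and the NUN differ within it.
    diffs = np.abs(query - nun)
    scores = np.convolve(diffs, np.ones(window), mode="valid")
    start = int(np.argmax(scores))

    # 3. Adapt: copy the NUN's values over that region of the query and
    #    check whether the black-box prediction flips to a new class c'.
    counterfactual = query.copy()
    counterfactual[start:start + window] = nun[start:start + window]
    new_class = classifier.predict(counterfactual[None, :])[0]
    return counterfactual if new_class != query_class else None
```

Because only a short subsequence of $T_{q}$ is modified, and the substituted values come from a real instance of class $c'$, the resulting counterfactual tends toward the sparsity and plausibility properties the abstract claims.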