A trustworthy machine learning model should be accurate as well as explainable. Explainability refers to understanding why a model makes a certain decision. While various flavors of explainability have been well studied in supervised learning paradigms such as classification and regression, the literature on explainability for time series forecasting is relatively scarce. In this paper, we propose a feature-based explainability algorithm, TsSHAP, that can explain the forecast of any black-box forecasting model. The method is agnostic to the forecasting model and provides explanations for a forecast in terms of interpretable features defined by the user a priori. The explanations consist of SHAP values obtained by applying the TreeSHAP algorithm to a surrogate model that learns a mapping from the interpretable feature space to the forecasts of the black-box model. Moreover, we formalize the notions of local, semi-local, and global explanations in the context of time series forecasting, which can be useful in several scenarios. We validate the efficacy and robustness of TsSHAP through extensive experiments on multiple datasets.
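To make the surrogate-plus-TreeSHAP idea concrete, below is a minimal Python sketch using the `shap` library and scikit-learn. The black-box forecaster, the interpretable feature choices, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical black-box forecaster: any callable mapping a history window
# to a one-step-ahead forecast. A naive last-value rule stands in here.
def black_box_forecast(window):
    return window[-1]

# User-defined interpretable features computed from each history window
# (the feature set is illustrative, not the paper's exact choice).
def interpretable_features(window):
    return np.array([
        window[-1],           # most recent value (lag-1)
        window[-2],           # lag-2
        window[-7:].mean(),   # weekly rolling mean
        window[-7:].std(),    # weekly rolling volatility
    ])

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))  # synthetic series for illustration

# Build a supervised dataset: interpretable features of each window
# as inputs, the black-box model's forecast as the target.
window_len = 30
X, y = [], []
for t in range(window_len, len(series)):
    w = series[t - window_len:t]
    X.append(interpretable_features(w))
    y.append(black_box_forecast(w))
X, y = np.array(X), np.array(y)

# Fit a tree-based surrogate that mimics the black-box forecasts, so the
# fast TreeSHAP algorithm can attribute each forecast to the features above.
surrogate = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X)  # one row of attributions per forecast
```

In this sketch, a single row of `shap_values` is a local explanation of one forecast; aggregating rows over a subset of the series or over the whole dataset corresponds to the semi-local and global explanations the abstract refers to.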