As machine learning and deep learning models have become highly prevalent across a multitude of domains, the main reservation about their adoption in decision-making processes is their black-box nature. The Explainable Artificial Intelligence (XAI) paradigm has gained considerable momentum lately due to its ability to reduce model opacity. XAI methods have not only increased stakeholders' trust in the decision process but also helped developers ensure its fairness. Recent efforts have been invested in creating transparent models and post-hoc explanations. However, fewer methods have been developed for time series data, and even fewer for multivariate datasets. In this work, we take advantage of the inherent interpretability of shapelets to develop a model-agnostic multivariate time series (MTS) counterfactual explanation algorithm. Counterfactuals can have a tremendous impact on making black-box models explainable by indicating what changes must be made to the input to alter the final decision. We test our approach on a real-life solar flare prediction dataset and show that it produces high-quality counterfactuals. Moreover, a comparison to the only existing MTS counterfactual generation algorithm shows that, in addition to being visually interpretable, our explanations are superior in terms of proximity, sparsity, and plausibility.