Although the field of predictive process monitoring has recently shifted towards models from the field of explainable artificial intelligence, evaluation still relies mainly on performance-based metrics and therefore does not account for the actionability and implications of the explanations. In this paper, we define explainability in process outcome prediction through the interpretability of the explanations and the faithfulness of the explainability model. The introduced properties are analysed along the event, case, and control-flow perspectives that are typical of a process-based analysis, which allows inherently created explanations to be compared with post-hoc explanations. We benchmark seven classifiers on thirteen real-life event logs, covering a range of transparent and non-transparent machine learning and deep learning models, further complemented with explainability techniques. Finally, this paper contributes a set of guidelines named X-MOP for selecting the appropriate model based on the event log specifications, by providing insight into how the preprocessing, model complexity, and explainability techniques typical of process outcome prediction influence the explainability of the model.