Recently, the field of Outcome-Oriented Predictive Process Monitoring (OOPPM) has shifted toward models from the eXplainable Artificial Intelligence (XAI) paradigm; however, evaluation still occurs mainly through performance-based metrics that do not account for the implications and lack of actionability of the explanations. In this paper, we define explainability through the interpretability of the explanations (via the widely used XAI properties of parsimony and functional complexity) and the faithfulness of the explainability model (via monotonicity and the level of disagreement). The introduced properties are analysed along the event, case, and control-flow perspectives typical of a process-based analysis. This allows us to quantitatively compare, inter alia, inherently created explanations (e.g., logistic regression coefficients) with post-hoc explanations (e.g., Shapley values). Moreover, this paper contributes a guideline named X-MOP that helps practitioners select the appropriate model based on the event log specifications and the task at hand, by providing insight into how the preprocessing, model complexity, and post-hoc explainability techniques typical of OOPPM influence the explainability of the model. To this end, we benchmark seven classifiers on thirteen real-life event logs.
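To make the contrast between inherent and post-hoc explanations concrete, the following minimal sketch (not the paper's benchmark code; the synthetic data, feature count, and the Spearman-based disagreement proxy are illustrative assumptions) compares logistic regression coefficients against Shapley values and quantifies their level of disagreement:

```python
# A minimal sketch contrasting an inherent explanation (logistic regression
# coefficients) with a post-hoc one (Shapley values), and quantifying their
# disagreement via rank correlation. Synthetic data stands in for a
# propositionally encoded event log (cases x features); this is an
# illustrative assumption, not the paper's experimental setup.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for an encoded event log: 500 cases, 8 features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Inherent explanation: the model's own coefficients.
inherent = clf.coef_.ravel()

# Post-hoc explanation: mean absolute SHAP value per feature.
explainer = shap.LinearExplainer(clf, X)
posthoc = np.abs(explainer.shap_values(X)).mean(axis=0)

# One possible disagreement measure: rank correlation between the two
# feature-importance orderings (low correlation = high disagreement).
rho, _ = spearmanr(np.abs(inherent), posthoc)
print(f"Spearman rank correlation between explanations: {rho:.3f}")

# One possible parsimony proxy: count of near-zero coefficients.
n_zero = int(np.sum(np.isclose(inherent, 0, atol=1e-2)))
print(f"Near-zero coefficients (parsimony proxy): {n_zero}/{inherent.size}")
```

For a linear model the two attributions typically agree closely; the disagreement measure becomes informative when the post-hoc explainer is applied to more complex classifiers, which is where such faithfulness checks matter.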