As data-driven intelligent systems advance, reliable and transparent decision-making mechanisms become increasingly important. Integrating uncertainty quantification with model explainability is therefore essential for fostering trustworthy business and operational process analytics. This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches, such as Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots. In addition, it examines suitable visual analytics approaches to support this methodological integration. By combining these two research directions, decision-makers can not only assess the plausibility of explanation-driven actionable insights but also validate their reliability. Finally, the study includes expert interviews to evaluate the suitability of the proposed approach and the designed interface for a real-world predictive process monitoring problem in the manufacturing domain.
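As a rough illustration of the kind of integration the abstract describes (not the authors' implementation), the sketch below computes ICE curves and a PDP for one feature while deriving an uncertainty band from the spread of per-tree predictions in a random forest; the synthetic data, the chosen feature, and the use of ensemble spread as an uncertainty proxy are all assumptions for illustration.

```python
# Minimal sketch of an uncertainty-aware PDP/ICE computation.
# Assumption: ensemble spread across trees serves as the uncertainty proxy;
# the data and feature choice below are purely illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

feature = 0                                   # feature to explain
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 25)

ice_mean = np.empty((len(grid), len(X)))      # per-instance ICE curves
ice_std = np.empty((len(grid), len(X)))       # per-instance uncertainty bands

for g, value in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, feature] = value                 # intervene on the feature value
    # Per-tree predictions, shape (n_trees, n_samples).
    tree_preds = np.stack([t.predict(X_mod) for t in model.estimators_])
    ice_mean[g] = tree_preds.mean(axis=0)     # ICE value per instance
    ice_std[g] = tree_preds.std(axis=0)       # ensemble spread = uncertainty

pdp = ice_mean.mean(axis=1)                   # PDP = average of the ICE curves
pdp_unc = ice_std.mean(axis=1)                # average uncertainty along the PDP
```

Plotting `pdp` with a band of `pdp ± pdp_unc`, or overlaying individual `ice_mean` curves shaded by `ice_std`, would yield the kind of uncertainty-augmented explanation views discussed in the study; other uncertainty estimators (e.g., Monte Carlo dropout or conformal prediction) could replace the ensemble spread in this sketch.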