Prescriptive Process Monitoring (PresPM) recommends interventions during business processes to optimize key performance indicators (KPIs). In realistic settings, interventions are rarely isolated: organizations need to align sequences of interventions to jointly steer the outcome of a case. Existing PresPM approaches fall short in this respect. Many focus on a single intervention decision, while others treat multiple interventions independently, ignoring how they interact over time. Methods that do address these dependencies rely on either simulation or data augmentation to approximate the process for training a Reinforcement Learning (RL) agent, which can create a reality gap and introduce bias. We introduce SCOPE, a PresPM approach that learns aligned sequential intervention recommendations. SCOPE employs backward induction to estimate the effect of each candidate intervention action, propagating its impact from the final decision point back to the first. By leveraging causal learners, our method can use observational data directly, unlike methods that require constructing process approximations for reinforcement learning. Experiments on both an existing synthetic dataset and a new semi-synthetic dataset show that SCOPE consistently outperforms state-of-the-art PresPM techniques in optimizing the KPI. The novel semi-synthetic setup, based on a real-life event log, is provided as a reusable benchmark for future work on sequential PresPM.
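To make the backward-induction idea concrete, the following is a minimal sketch, not SCOPE's actual implementation: it assumes binary interventions at a fixed number of decision points, uses a simple T-learner (one scikit-learn regressor per candidate action) as the causal learner, and propagates the estimated best achievable value from the last decision point back to the first. All variable names, the synthetic data generator, and the choice of regressor are illustrative assumptions.

```python
# Hypothetical sketch of backward induction with causal learners for
# sequential intervention recommendation (not the paper's exact method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, T = 1000, 3                            # cases and decision points (assumed)
X = rng.normal(size=(n, T, 4))            # case state features at each decision point
A = rng.integers(0, 2, size=(n, T))       # observed binary interventions
# Synthetic KPI: each intervention helps when the first state feature is high.
y = (X[:, :, 0] * (2 * A - 1)).sum(axis=1) + rng.normal(scale=0.1, size=n)

value = y.copy()                          # value-to-go, initialized with the final KPI
policies = [None] * T
for t in reversed(range(T)):              # backward induction: last decision first
    # T-learner: fit one outcome model per candidate action at decision point t.
    models = {}
    for a in (0, 1):
        mask = A[:, t] == a
        m = RandomForestRegressor(n_estimators=100, random_state=0)
        m.fit(X[mask, t], value[mask])
        models[a] = m
    policies[t] = models
    # Propagate: replace the value-to-go with the best estimated value at t,
    # so earlier decision points are trained against optimal future actions.
    preds = np.stack([models[a].predict(X[:, t]) for a in (0, 1)], axis=1)
    value = preds.max(axis=1)

# Recommend an action for a new case state at the first decision point.
x_new = rng.normal(size=(1, 4))
scores = [policies[0][a].predict(x_new)[0] for a in (0, 1)]
print("recommended intervention at t=0:", int(np.argmax(scores)))
```

Because the outcome models are fit directly on observed cases, this style of estimator needs no simulator or augmented data, which is the property the abstract attributes to causal learners.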