The inability of artificial neural networks to assess the uncertainty of their predictions is an impediment to their widespread use. We distinguish two types of learnable uncertainty: model uncertainty, due to a lack of training data, and noise-induced observational uncertainty. Bayesian neural networks use solid mathematical foundations to learn the model uncertainties of their predictions. The observational uncertainty can be calculated by adding one layer to these networks and augmenting their loss functions. Our contribution is to apply these uncertainty concepts to predictive process monitoring tasks, training uncertainty-based models to predict both remaining time and outcomes. Our experiments show that uncertainty estimates make it possible to distinguish more accurate from less accurate predictions and to construct confidence intervals, in both regression and classification tasks. These conclusions hold even in the early stages of running processes. Moreover, the deployed techniques are fast and produce more accurate predictions. The learned uncertainty could increase users' confidence in their process prediction systems, promote better cooperation between humans and these systems, and enable earlier deployments on smaller datasets.
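The augmented loss mentioned above can be illustrated with a minimal NumPy sketch, assuming the common heteroscedastic formulation in which the extra output layer predicts a log-variance alongside the mean; the function name and signature here are illustrative, not the paper's implementation:

```python
import numpy as np

def gaussian_nll(y_true, mu, log_var):
    # Heteroscedastic Gaussian negative log-likelihood (constant term dropped):
    # the network predicts both the mean mu and the log-variance log_var,
    # so the learned observational noise down-weights the squared error
    # on inherently noisy observations.
    return 0.5 * (np.exp(-log_var) * (y_true - mu) ** 2 + log_var)
```

With `log_var = 0` this reduces to the ordinary squared-error loss (up to the factor 1/2); predicting a larger variance lowers the penalty on a given residual at the cost of the `log_var` regularization term, which is how the network learns observational uncertainty.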