Effective quantification of uncertainty is an essential and still missing step towards wider adoption of deep-learning approaches in many applications, including mission-critical ones. In particular, investigations into the predictive uncertainty of deep-learning models describing non-linear dynamical systems remain very limited to date. This paper aims to fill this gap and presents preliminary results on uncertainty quantification for system identification with neural state-space models. We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs through approximate inference techniques. Based on the posterior, we construct credible intervals on the outputs and define a surprise index that effectively diagnoses use of the model in a potentially dangerous out-of-distribution regime, where its predictions cannot be trusted.
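As a minimal mathematical sketch of the setting described above (the specific state-space parameterization and the surprise-index definition below are illustrative assumptions, not taken verbatim from the abstract): consider a neural state-space model

x_{k+1} = f_\theta(x_k, u_k), \qquad y_k = g_\theta(x_k) + e_k, \qquad e_k \sim \mathcal{N}(0, \sigma^2),

where f_\theta and g_\theta are neural networks with weights \theta, u_k is the input, and x_k the hidden state. Bayesian learning on a dataset \mathcal{D} yields the posterior and the predictive distribution of the output,

p(\theta \mid \mathcal{D}) \propto p(\mathcal{D} \mid \theta)\, p(\theta), \qquad p(y_k \mid \mathcal{D}) = \int p(y_k \mid \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta,

both of which are intractable in general and must be approximated, e.g., by Monte Carlo sampling. Credible intervals on the outputs then follow from the quantiles of the approximate predictive distribution, and one natural candidate for a surprise index is the negative log predictive density S_k = -\log p(y_k \mid \mathcal{D}), which grows large when the model is queried out of distribution.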