Neural Stochastic Differential Equations (NSDEs) model the drift and diffusion functions of a stochastic process as neural networks. While NSDEs are known to make accurate predictions, their uncertainty quantification properties have so far remained unexplored. We report the empirical finding that obtaining well-calibrated uncertainty estimates from NSDEs is computationally prohibitive. As a remedy, we develop a computationally affordable deterministic scheme that accurately approximates the transition kernel when the dynamics are governed by an NSDE. Our method introduces a bidimensional moment matching algorithm, vertical along the neural network layers and horizontal along the time direction, which benefits from an original combination of effective approximations. Our deterministic approximation of the transition kernel is applicable to both training and prediction. We observe in multiple experiments that the uncertainty calibration quality of our method can be matched by Monte Carlo sampling only at a high computational cost. Thanks to the numerical stability of deterministic training, our method also improves prediction accuracy.
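To make the idea of deterministic moment matching along the time direction concrete, the following is a minimal sketch under a first-order linearization assumption: the state is kept approximately Gaussian and its mean and covariance are propagated through an Euler-style update instead of drawing Monte Carlo sample paths. It only illustrates the horizontal (in-time) part; the layer-wise (vertical) moment matching through the drift and diffusion networks described in the abstract is not reproduced here, and the names f_net, L_net, and moment_match_step are illustrative, not the authors' API.

```python
import torch

def moment_match_step(f_net, L_net, m, P, dt):
    """One deterministic moment update for dx = f(x) dt + L(x) dW.

    m : (d,) current mean
    P : (d, d) current covariance
    Returns approximate mean and covariance after a step of size dt,
    assuming the state distribution stays approximately Gaussian.
    """
    # Jacobian of the drift evaluated at the mean (linearization assumption).
    J = torch.autograd.functional.jacobian(f_net, m)
    L = L_net(m)                                  # (d, q) diffusion at the mean
    m_next = m + f_net(m) * dt                    # mean follows the drift
    P_next = P + (J @ P + P @ J.T + L @ L.T) * dt # covariance: drift + diffusion
    return m_next, P_next

# Usage: propagate (m, P) over a horizon without Monte Carlo sampling.
d = 2
f_net = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, d))
L_net = lambda x: 0.1 * torch.eye(d)  # constant diffusion, for illustration only
m, P = torch.zeros(d), 0.01 * torch.eye(d)
for _ in range(100):
    m, P = moment_match_step(lambda x: f_net(x), L_net, m, P, dt=0.01)
```

The resulting mean and covariance give a cheap deterministic approximation of the transition kernel; the paper's contribution is a more accurate layer-wise treatment of the neural drift and diffusion than the simple linearization used in this sketch.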