Recently, Neural Ordinary Differential Equations (Neural ODEs) have emerged as a powerful framework for modeling physical simulations: rather than explicitly defining the ODEs governing a system, the framework learns them via machine learning. However, the question "Can Bayesian learning frameworks be integrated with Neural ODEs to robustly quantify the uncertainty in the weights of a Neural ODE?" remains unanswered. In an effort to address this question, we primarily evaluate the following categories of inference methods: (a) the No-U-Turn MCMC sampler (NUTS), (b) Stochastic Gradient Hamiltonian Monte Carlo (SGHMC), and (c) Stochastic Gradient Langevin Dynamics (SGLD). We demonstrate the successful integration of Neural ODEs with the above Bayesian inference frameworks on classical physical systems, as well as on standard machine learning datasets such as MNIST, using GPU acceleration. On the MNIST dataset, we achieve a posterior sample accuracy of 98.5% on the test ensemble of 10,000 images. Subsequently, for the first time, we demonstrate the successful integration of variational inference with normalizing flows and Neural ODEs, leading to a powerful Bayesian Neural ODE object. Finally, considering a predator-prey model and an epidemiological system, we demonstrate the probabilistic identification of model specification in partially described dynamical systems using universal ordinary differential equations. Together, this gives a scientific machine learning tool for probabilistic estimation of epistemic uncertainties.
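To make the third inference method concrete, the following is a minimal sketch of the SGLD update rule on a toy one-parameter posterior (a Gaussian mean, standing in for the weights of a Neural ODE). All names, step sizes, and the toy model here are illustrative assumptions, not the paper's actual implementation: SGLD takes a minibatch-rescaled gradient step on the log posterior and injects Gaussian noise scaled by the step size, so the iterates approximately sample the posterior rather than converge to a point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: infer the posterior over the mean mu of a unit-variance Gaussian.
data = rng.normal(loc=2.0, scale=1.0, size=1000)
N = len(data)

def grad_log_posterior(mu, batch):
    """Stochastic gradient of the log posterior: N(0, 10) prior, unit-variance likelihood.
    The likelihood term is rescaled by N / |batch| to be unbiased for the full dataset."""
    grad_prior = -mu / 10.0
    grad_lik = (N / len(batch)) * np.sum(batch - mu)
    return grad_prior + grad_lik

# SGLD update: theta <- theta + (eps/2) * grad + Normal(0, eps) noise.
mu, eps, samples = 0.0, 1e-4, []
for step in range(5000):
    batch = rng.choice(data, size=32)
    mu += 0.5 * eps * grad_log_posterior(mu, batch) + rng.normal(0.0, np.sqrt(eps))
    if step > 1000:            # discard burn-in
        samples.append(mu)

posterior_mean = np.mean(samples)   # should lie close to the data mean (~2.0)
```

In the Bayesian Neural ODE setting, `mu` becomes the weight vector of the network defining the ODE right-hand side, and the gradient is obtained by backpropagating through the ODE solver; the update rule itself is unchanged.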