Physics-informed deep learning has recently emerged as an effective tool for leveraging both observational data and available physical laws. Physics-informed neural networks (PINNs) and deep operator networks (DeepONets) are two such models. The former encodes the physical laws via automatic differentiation, while the latter learns the hidden physics from data. In general, noisy and limited observational data, as well as overparameterization in neural networks (NNs), result in uncertainty in the predictions of deep learning models. In [1], a Bayesian framework based on generative adversarial networks (GANs) was proposed as a unified model to quantify uncertainties in the predictions of both PINNs and DeepONets. Specifically, the approach proposed in [1] has two stages: (1) prior learning, and (2) posterior estimation. In the first stage, GANs are employed to learn a functional prior either from a prescribed function distribution, e.g., a Gaussian process, or from historical data and available physics. In the second stage, the Hamiltonian Monte Carlo (HMC) method is utilized to estimate the posterior in the latent space of the GANs. However, vanilla HMC does not support mini-batch training, which limits its applicability to problems with big data. In the present work, we propose to use normalizing flow (NF) models in the context of variational inference, which naturally enables mini-batch training, as an alternative to HMC for posterior estimation in the latent space of GANs. A series of numerical experiments, including a nonlinear differential equation problem and a 100-dimensional Darcy problem, is conducted to demonstrate that NFs with full-/mini-batch training are able to achieve accuracy similar to that of the ``gold standard'' HMC.
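The replacement of HMC by flow-based variational inference in the latent space can be sketched as follows. This is a minimal illustration, not the authors' implementation: the "generator" is an invented fixed linear map standing in for a pre-trained GAN generator (so the exact latent posterior is Gaussian and checkable), and the flow is a single affine layer trained by stochastic-gradient variational inference on mini-batches, rather than a deep NF. All names and settings below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub "generator": a fixed linear map standing in for a pre-trained GAN
# generator G(z). With a linear G the exact posterior over the latent z
# is Gaussian, so the variational result can be checked in closed form.
d_z, d_y, N = 2, 5, 100            # latent dim, output dim, no. of observations
A = rng.normal(size=(d_y, d_z))    # hypothetical generator weights
z_true = rng.normal(size=d_z)
sigma = 1.0                        # observation noise std
Y = A @ z_true + sigma * rng.normal(size=(N, d_y))

# Variational posterior over z: one affine flow layer, z = mu + exp(log_s)*eps,
# trained with the reparameterization trick on mini-batches of the data.
mu, log_s = np.zeros(d_z), np.zeros(d_z)
lr, B, steps = 5e-5, 20, 20000
for _ in range(steps):
    idx = rng.choice(N, size=B, replace=False)  # mini-batch of observations
    eps = rng.normal(size=d_z)
    z = mu + np.exp(log_s) * eps                # reparameterized sample

    # Gradient of the log-likelihood w.r.t. z, rescaled from batch to full
    # data, plus the gradient of the standard-normal log-prior on z.
    resid = Y[idx] - z @ A.T                    # (B, d_y) residuals
    g_z = (N / B) * (A.T @ resid.sum(axis=0)) / sigma**2 - z

    # Reparameterization gradients of the ELBO (for this affine flow the
    # entropy term contributes exactly +1 per dimension to d/dlog_s).
    mu    += lr * g_z
    log_s += lr * (g_z * eps * np.exp(log_s) + 1.0)

# Exact Gaussian posterior mean for comparison.
Lam = np.eye(d_z) + (A.T @ A) * N / sigma**2
mu_exact = np.linalg.solve(Lam, A.T @ Y.sum(axis=0) / sigma**2)
print(mu, mu_exact)
```

Because each gradient step touches only a mini-batch of the observations (rescaled by `N / B`), the cost per step is independent of the dataset size; this is exactly the property that vanilla HMC lacks, since its accept/reject step requires the full-data likelihood.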