In this paper, we establish a novel framework for uncertainty quantification via the information bottleneck (IB-UQ) for scientific machine learning tasks, including deep neural network (DNN) regression and neural operator learning (DeepONet). Specifically, we first employ the General Incompressible-Flow Networks (GIN) model to learn a "wide" distribution from noisy observation data. Then, following the information bottleneck objective, we learn a stochastic map from the input to a latent representation that can be used to predict the output. A tractable variational bound on the IB objective is constructed with a normalizing-flow reparameterization, so the objective can be optimized by stochastic gradient descent. By explicitly modeling the representation variables, IB-UQ provides both a mean and a variance for the label prediction. Compared with most DNN regression methods and the deterministic DeepONet, the proposed model can be trained on noisy data and provides accurate predictions with reliable uncertainty estimates on unseen noisy data. We demonstrate the capability of the proposed IB-UQ framework on several representative examples, including discontinuous function regression, real-world dataset regression, and learning the nonlinear operator of a diffusion-reaction partial differential equation.
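To make the training objective concrete, the following is a minimal NumPy sketch of a variational information bottleneck loss of the standard form L = E[-log q(y|z)] + beta * KL(p(z|x) || r(z)), with a Gaussian encoder and the reparameterization trick. This is an illustrative toy, not the paper's model: IB-UQ uses a normalizing-flow reparameterization and a GIN-learned data distribution, both of which are replaced here by simple Gaussians, and all function and weight names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    """Toy Gaussian encoder p(z|x): mean and log-variance linear in x (assumption)."""
    return x @ W_mu, x @ W_logvar

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def vib_loss(x, y, W_mu, W_logvar, W_dec, beta=1e-2):
    """Variational IB bound: reconstruction NLL plus beta-weighted rate term."""
    mu, logvar = encoder(x, W_mu, W_logvar)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps          # reparameterization trick
    y_hat = z @ W_dec                            # decoder mean of q(y|z)
    nll = 0.5 * np.sum((y - y_hat) ** 2, axis=1) # Gaussian -log q(y|z), unit variance
    return np.mean(nll + beta * kl_to_standard_normal(mu, logvar))

# Toy regression data: y = 2x + noise (hypothetical, for illustration only)
x = rng.standard_normal((64, 1))
y = 2.0 * x + 0.1 * rng.standard_normal((64, 1))
W_mu = np.ones((1, 2))
W_logvar = -2.0 * np.ones((1, 2))
W_dec = np.ones((2, 1))
print(vib_loss(x, y, W_mu, W_logvar, W_dec))
```

Because z is sampled, repeated forward passes yield a distribution over predictions, from which a predictive mean and variance can be estimated; in IB-UQ this stochastic latent representation is what carries the uncertainty estimate.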