In recent years, the field of machine learning has made phenomenal progress in simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-informed framework for training neural networks capable of rapidly modelling the posterior distribution of the unknown parameter of interest. Specifically, our framework is derived from divergence-based variational inference so that most of the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, the framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution, providing greater flexibility in controlling how the optimization drives the learning of the posterior model. Further, the framework possesses an inherent adaptive-optimization property that emerges through the learning of the posterior uncertainty.
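To make the role of the adjustable hyperparameter concrete, the following is a minimal illustrative sketch rather than the specific divergence family derived in this work: suppose the notion of distance between the posterior model $q_\phi$ and the target posterior $\pi_{\mathrm{post}}$ is taken to be a convex combination of the reverse and forward Kullback-Leibler divergences, controlled by a scalar $\alpha \in [0,1]$,
$$
D_\alpha\!\left(q_\phi,\,\pi_{\mathrm{post}}\right)
= (1-\alpha)\,D_{\mathrm{KL}}\!\left(q_\phi \,\|\, \pi_{\mathrm{post}}\right)
+ \alpha\,D_{\mathrm{KL}}\!\left(\pi_{\mathrm{post}} \,\|\, q_\phi\right).
$$
Under this assumed family, $\alpha = 0$ recovers the standard reverse-KL objective of conventional variational inference, while larger values of $\alpha$ shift the optimization toward more mass-covering behaviour in the learned posterior; the hyperparameter thus directly shapes how the training procedure balances the two directions of mismatch.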