Fast inference of numerical model parameters from data is an important prerequisite for generating predictive models across a wide range of applications. Sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive. New approaches that combine variational inference with normalizing flows are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space, and rely on gradient-based optimization instead of sampling, providing a more efficient approach to Bayesian inference about the model parameters. Moreover, the cost of frequently evaluating an expensive likelihood can be mitigated by replacing the true model with an offline-trained surrogate model, such as a neural network. However, this approach can introduce significant bias when the surrogate is insufficiently accurate around the posterior modes. To reduce the computational cost without sacrificing inferential accuracy, we propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the surrogate model parameters. We also propose an efficient sample weighting scheme for surrogate model training that preserves global accuracy while effectively capturing high posterior density regions. We demonstrate the inferential and computational superiority of NoFAS over various benchmarks, including cases where the underlying model lacks identifiability. The source code and numerical experiments used for this study are available at https://github.com/cedricwangyu/NoFAS.
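The alternating scheme described above can be illustrated with a deliberately small sketch. Everything here is a hypothetical stand-in chosen for brevity, not the components used in NoFAS: the "expensive" forward model is `exp`, the surrogate is a weighted cubic polynomial fit, the "flow" is a one-dimensional affine transform, and the weighting rule simply upweights calibration points drawn from the current variational density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an "expensive" forward model with one scalar parameter z.
def true_model(z):
    return np.exp(z)

# Synthetic observation generated at z* = 1.0 (noise-free for simplicity).
sigma = 0.1
d_obs = true_model(1.0)

# Surrogate: cubic polynomial refit by weighted least squares.
def fit_surrogate(zs, ys, ws):
    return np.polyfit(zs, ys, deg=3, w=ws)

# Initial calibration grid (for global accuracy), unit weights.
z_cal = np.linspace(-3.0, 3.0, 12)
y_cal = true_model(z_cal)
w_cal = np.ones_like(z_cal)
coef = fit_surrogate(z_cal, y_cal, w_cal)

# Affine one-parameter "flow": z = mu + exp(log_s) * eps, eps ~ N(0, 1).
mu, log_s = 0.0, 0.0

# Monte Carlo estimate of the variational loss (negative ELBO up to a constant),
# using the surrogate in place of the true model.
def loss(mu, log_s, eps, coef):
    z = mu + np.exp(log_s) * eps
    resid = d_obs - np.polyval(coef, z)
    nll = np.mean(resid**2) / (2 * sigma**2)   # Gaussian likelihood term
    prior = np.mean(z**2) / (2 * 2.0**2)       # N(0, 2^2) prior term
    return nll + prior - log_s                 # minus entropy of q, up to const.

for outer in range(10):
    # Surrogate update: evaluate the true model where q(z) puts mass,
    # upweight those points, and refit the surrogate.
    z_new = mu + np.exp(log_s) * rng.standard_normal(20)
    z_cal = np.concatenate([z_cal, z_new])
    y_cal = np.concatenate([y_cal, true_model(z_new)])
    w_cal = np.concatenate([w_cal, 5.0 * np.ones_like(z_new)])
    coef = fit_surrogate(z_cal, y_cal, w_cal)

    # Flow update: clipped finite-difference gradient descent on the loss,
    # reusing the same eps draw for both sides of each difference.
    for _ in range(50):
        eps = rng.standard_normal(64)
        h, lr = 1e-4, 0.02
        g_mu = (loss(mu + h, log_s, eps, coef)
                - loss(mu - h, log_s, eps, coef)) / (2 * h)
        g_ls = (loss(mu, log_s + h, eps, coef)
                - loss(mu, log_s - h, eps, coef)) / (2 * h)
        g = np.array([g_mu, g_ls])
        g = g / max(1.0, np.linalg.norm(g) / 5.0)  # clip gradient norm to 5
        mu -= lr * g[0]
        log_s -= lr * g[1]

print("posterior mean estimate:", mu, "posterior std estimate:", np.exp(log_s))
```

The variational mean drifts toward the data-generating value z* = 1 while the surrogate is only ever refit where the current posterior approximation is dense, which is the point of the alternating updates: neither the flow nor the surrogate needs to be accurate globally, only near the posterior modes.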