Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks, which we call BayesFlow. The method uses simulation to learn a global estimator for the probabilistic mapping from observed data to underlying model parameters. A neural network pre-trained in this way can then, without additional training or optimization, infer full posteriors on arbitrarily many real data sets involving the same model family. In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to modeling scenarios where standard inference techniques with hand-crafted summary statistics fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science, and ecology. We argue that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated.
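To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' reference implementation) of simulation-based, amortized posterior estimation: a permutation-invariant summary network compresses each simulated data set into fixed-size statistics, and a conditional invertible (affine coupling) network is trained on simulated (parameter, data) pairs to approximate the posterior via the change-of-variables density. The toy Gaussian forward model, the use of a single coupling block, and all network sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

D_THETA, D_SUM, N_OBS = 2, 4, 50  # parameter dim, summary dim, observations per data set

def simulate(batch_size):
    """Toy forward model (illustrative): theta ~ N(0, I); each x_i ~ N(theta, I)."""
    theta = torch.randn(batch_size, D_THETA)
    x = theta.unsqueeze(1) + torch.randn(batch_size, N_OBS, D_THETA)
    return theta, x

class SummaryNet(nn.Module):
    """Permutation-invariant embedding of a data set via a mean-pooled MLP."""
    def __init__(self):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(D_THETA, 64), nn.ReLU(), nn.Linear(64, D_SUM))
    def forward(self, x):                       # x: (batch, N_OBS, D_THETA)
        return self.phi(x).mean(dim=1)          # (batch, D_SUM)

class ConditionalCoupling(nn.Module):
    """One affine coupling block conditioned on learned summary statistics.
    A practical flow would stack several such blocks with permutations in between."""
    def __init__(self):
        super().__init__()
        self.half = D_THETA // 2
        in_dim = self.half + D_SUM
        out_dim = 2 * (D_THETA - self.half)     # scale and shift for the second half
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, theta, cond):
        t1, t2 = theta[:, :self.half], theta[:, self.half:]
        s, b = self.net(torch.cat([t1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales bounded for stability
        z = torch.cat([t1, t2 * torch.exp(s) + b], dim=-1)
        log_det = s.sum(dim=-1)
        return z, log_det

summary_net, flow = SummaryNet(), ConditionalCoupling()
opt = torch.optim.Adam(list(summary_net.parameters()) + list(flow.parameters()), lr=1e-3)
base = torch.distributions.Normal(0.0, 1.0)

# Simulation-based training: maximize the flow's change-of-variables density
# of the true parameters given the summary of the simulated data set.
for step in range(2000):
    theta, x = simulate(128)
    z, log_det = flow(theta, summary_net(x))
    loss = -(base.log_prob(z).sum(dim=-1) + log_det).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the amortization step would consist of inverting the coupling transform and pushing base-distribution samples through it, conditioned on the summary of any new observed data set, to obtain posterior draws without further optimization; that inverse pass is omitted here for brevity.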