Bayesian Likelihood-Free Inference (LFI) approaches make it possible to obtain posterior distributions for stochastic models with intractable likelihoods by relying on model simulations. In Approximate Bayesian Computation (ABC), a popular LFI method, summary statistics are used to reduce data dimensionality. ABC algorithms adaptively tailor simulations to the observation in order to sample from an approximate posterior, whose form depends on the chosen statistics. In this work, we introduce a new way to learn ABC statistics: we first generate parameter-simulation pairs from the model independently of the observation; then, we use Score Matching to train a neural conditional exponential family to approximate the likelihood. The exponential family is the largest class of distributions with fixed-size sufficient statistics; using its learned statistics within ABC is therefore intuitively appealing and yields state-of-the-art performance. In parallel, we plug our likelihood approximation into an MCMC sampler for doubly-intractable distributions to draw posterior samples directly. Both procedures can be repeated for any number of observations with no additional model simulations, with performance comparable to related approaches. We validate our methods on toy models with known likelihood and on a high-dimensional time-series model.
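To make the two ingredients above concrete, here is a minimal sketch using illustrative notation of our own (the networks $f_w$, $g_w$ and weights $w$ are assumed names, not symbols taken from the text). The likelihood is approximated by a conditional exponential family whose sufficient statistics $f_w(x)$ and natural parameters $g_w(\theta)$ are neural networks,
\[
q_w(x \mid \theta) \propto \exp\!\big( f_w(x)^\top g_w(\theta) \big),
\]
whose normalizing constant is intractable in general. Score Matching sidesteps this constant: the standard Hyvärinen objective,
\[
J(w) = \mathbb{E}_{p(\theta)\, p(x \mid \theta)} \Big[ \tfrac{1}{2} \big\| \nabla_x \log q_w(x \mid \theta) \big\|^2 + \Delta_x \log q_w(x \mid \theta) \Big],
\]
depends on $\log q_w$ only through its gradient and Laplacian with respect to $x$, so the constant cancels. After training, $f_w(x)$ plays the role of the learned ABC summary statistics, while $q_w(x \mid \theta)$ itself is the doubly-intractable likelihood approximation handed to the MCMC sampler.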