To perform Bayesian inference for stochastic simulator models whose likelihood is not accessible, Likelihood-Free Inference (LFI) relies on simulations from the model. Standard LFI methods can be split according to how these simulations are used: to build an explicit Surrogate Likelihood, or to accept/reject parameter values according to a measure of distance from the observations (Approximate Bayesian Computation, ABC). In both cases, simulations are adaptively tailored to the value of the observation. Here, we generate parameter-simulation pairs from the model independently of the observation and use them to learn a conditional exponential family likelihood approximation; to parametrize it, we use Neural Networks whose weights are tuned with Score Matching. With our likelihood approximation, we can employ MCMC for doubly intractable distributions to draw samples from the posterior for any number of observations without additional model simulations, with performance competitive with related approaches. Further, the sufficient statistics of the exponential family can be used as summaries in ABC, outperforming the state-of-the-art method on five different models with known likelihood. Finally, we apply our method to a challenging model from meteorology.
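As an illustrative sketch of the setup described above (the notation below is ours, not taken verbatim from the paper): the surrogate likelihood is a conditional exponential family whose natural parameters and sufficient statistics are given by neural networks $w_\phi$ and $f_\phi$, and $\phi$ is fitted by minimizing a score matching objective over parameter-simulation pairs $(\theta_i, y_i)$ drawn from the prior and the simulator:

\[
  q_\phi(y \mid \theta) = \frac{\exp\!\big(w_\phi(\theta)^\top f_\phi(y)\big)}{Z_\phi(\theta)},
  \qquad
  J(\phi) = \frac{1}{n}\sum_{i=1}^{n}
    \Big[\operatorname{tr}\!\big(\nabla_y^2 \log q_\phi(y_i \mid \theta_i)\big)
       + \tfrac{1}{2}\,\big\lVert \nabla_y \log q_\phi(y_i \mid \theta_i)\big\rVert^2\Big].
\]

Because $\nabla_y \log q_\phi(y \mid \theta)$ does not depend on the normalizing constant $Z_\phi(\theta)$, the objective can be evaluated without computing it; the learned statistics $f_\phi(y)$ are what the abstract refers to as the summaries reused in ABC.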