We propose a framework for Bayesian Likelihood-Free Inference (LFI) based on Generalized Bayesian Inference. To define the generalized posterior, we use Scoring Rules (SRs), which evaluate probabilistic models given an observation. In LFI, we can sample from the model but not evaluate the likelihood; for this reason, we employ SRs which admit unbiased empirical estimates. We use the Energy and Kernel SRs, for which our posterior enjoys consistency in a well-specified setting and robustness to outliers, but our general framework applies to other SRs as well. The straightforward way to perform posterior inference relies on pseudo-marginal Markov Chain Monte Carlo (MCMC). While this works satisfactorily for simple setups, it mixes poorly, which makes inference impossible when many observations are present. Hence, we employ stochastic-gradient (SG) MCMC methods, which are rejection-free and thus have no mixing issues. The targets of both sampling schemes only approximate our posterior, but the error vanishes as the number of model simulations at each MCMC step increases. In practice, SG-MCMC performs better than pseudo-marginal MCMC at a lower computational cost when both are applicable, and it scales to higher-dimensional setups. In our simulation studies, we compare with related approaches on standard benchmarks and on a chaotic dynamical system from meteorology; for the latter, SG-MCMC allows us to infer the parameters of a neural network used to parametrize a part of the update equations of the dynamical system.
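As an illustration of why SRs with unbiased empirical estimates matter in LFI, the sketch below shows a standard unbiased estimator of the Energy Score built purely from model simulations, without ever evaluating the likelihood. This is a minimal assumption-laden example (the function name `energy_score_estimate`, the array shapes, and the default `beta` are illustrative choices, not the paper's implementation); it uses the common form of the Energy Score with a U-statistic for the pairwise term:

```python
import numpy as np

def energy_score_estimate(simulations, observation, beta=1.0):
    """Unbiased empirical estimate of the Energy Score (illustrative sketch).

    simulations: (m, d) array of i.i.d. draws from the model.
    observation: length-d observed data vector.
    beta: exponent in (0, 2); beta=1 gives the classical Energy Score.
    """
    m = simulations.shape[0]
    # Term 1: twice the mean distance between each simulation and the observation.
    term1 = 2.0 * np.mean(
        np.linalg.norm(simulations - observation, axis=1) ** beta
    )
    # Term 2: U-statistic over distinct pairs of simulations, which makes
    # the overall estimate unbiased for the population Energy Score.
    diffs = simulations[:, None, :] - simulations[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=2) ** beta
    term2 = (pairwise.sum() - np.trace(pairwise)) / (m * (m - 1))
    # Lower values indicate the simulated distribution better matches the data.
    return term1 - term2
```

A generalized posterior along these lines weights the prior by an exponentiated (negative) SR estimate, so simulations that sit closer to the observation yield lower scores and hence higher posterior mass for the corresponding parameter value.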