Approximate Bayesian computation (ABC) is the most popular approach to inferring parameters in the case where the data model is specified in the form of a simulator. It is not possible to directly implement standard Monte Carlo methods for inference in such a model, due to the likelihood not being available to evaluate pointwise. The main idea of ABC is to perform inference on an alternative model with an approximate likelihood (the ABC likelihood), estimated at each iteration from points simulated from the data model. The central challenge of ABC is then to trade off the bias (introduced by approximating the model) against the variance introduced by estimating the ABC likelihood. Stabilising the variance of the ABC likelihood requires a computational cost that is exponential in the dimension of the data, thus the most common approach to reducing variance is to perform inference conditional on summary statistics. In this paper we introduce a new approach to estimating the ABC likelihood: using iterative ensemble Kalman inversion (IEnKI) (Iglesias, 2016; Iglesias et al., 2018). We first introduce new estimators of the marginal likelihood in the case of a Gaussian data model using the IEnKI output, then show how this may be used in ABC. Performance is illustrated on the Lotka-Volterra model, where we observe substantial improvements over standard ABC and other commonly used approaches.
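To make the simulation-based idea concrete, the following is a minimal sketch of plain rejection ABC (the baseline the paper improves upon, not the IEnKI method itself): the likelihood is never evaluated; instead, a parameter draw is accepted when a summary of data simulated at that parameter falls within a tolerance of the observed summary. The model, prior, summary statistic, and tolerance here are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=20):
    # "Simulator": data y | theta ~ N(theta, 1); the likelihood is
    # treated as unavailable, we may only sample from the model.
    return rng.normal(theta, 1.0, size=n)

def summary(y):
    # Summary statistic used to reduce the dimension of the comparison.
    return y.mean()

y_obs = simulate(2.0)   # stand-in for the real observed data
s_obs = summary(y_obs)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)        # draw from a uniform prior
    s_sim = summary(simulate(theta))      # simulate, then summarise
    if abs(s_sim - s_obs) < 0.1:          # tolerance controls the bias
        accepted.append(theta)

accepted = np.array(accepted)
print(len(accepted), accepted.mean())
```

Shrinking the tolerance reduces the bias of the ABC posterior but lowers the acceptance rate (raising the variance for a fixed simulation budget), which is the bias-variance trade-off described above.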