In this paper, we address a key limitation of Markov chain Monte Carlo (MCMC) algorithms within the approximate Bayesian computation (ABC) framework: their inherently local exploration mechanism often leaves them trapped in local modes. We propose a novel Global-Local ABC-MCMC algorithm that combines the "exploration" capability of global proposals with the "exploitation" finesse of local proposals. By integrating iterative importance resampling into the likelihood-free framework, we construct an effective global proposal distribution. We select the optimal mixture of global and local moves by sequentially optimizing a per-unit-cost version of the expected squared jumped distance. Furthermore, we propose two adaptive schemes: the first uses a normalizing-flow-based density estimation model to iteratively refine the importance sampling proposal, and the second improves the efficiency of the local sampler via Langevin dynamics with common random numbers. We demonstrate numerically that our method improves sampling efficiency and achieves more reliable convergence for complex posteriors. A software package implementing the method is available at https://github.com/caofff/GL-ABC-MCMC.
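To make the mixture-kernel idea concrete, the following is a minimal Python sketch of a single Global-Local ABC-MCMC iteration. The function name `gl_abc_mcmc_step`, the global proposal object `global_q`, the helpers `simulate` and `dist`, and the fixed mixture weight `w_global` are illustrative assumptions, not the paper's implementation: the actual method additionally tunes the mixture weight via the expected squared jumped distance, refines the global proposal with a normalizing flow, and uses Langevin-type local moves with common random numbers.

```python
import numpy as np

# Hypothetical sketch of one Global-Local ABC-MCMC iteration. Assumes:
# a log-prior `log_prior`, a simulator `simulate(theta)`, a discrepancy
# `dist(x, x_obs)`, a tolerance `eps`, and a global proposal `global_q`
# exposing .sample() and .log_prob() (e.g., fitted by importance
# resampling or a normalizing flow). The local kernel here is a simple
# symmetric Gaussian random walk.

def gl_abc_mcmc_step(theta, x_obs, eps, w_global, global_q, log_prior,
                     simulate, dist, step_size=0.1,
                     rng=np.random.default_rng()):
    """One mixture-kernel step: a global independence move with
    probability w_global, otherwise a local random-walk move; each
    kernel is reversible on its own, so the mixture is valid."""
    if rng.random() < w_global:
        # Global move: independence proposal, so the Hastings ratio
        # includes the proposal density at both states.
        theta_prop = global_q.sample()
        log_ratio = (log_prior(theta_prop) - log_prior(theta)
                     + global_q.log_prob(theta)
                     - global_q.log_prob(theta_prop))
    else:
        # Local move: symmetric random walk, proposal terms cancel.
        theta_prop = theta + step_size * rng.standard_normal(theta.shape)
        log_ratio = log_prior(theta_prop) - log_prior(theta)

    # Likelihood-free (Marjoram-style) acceptance: a proposal can only
    # be accepted if its pseudo-data fall within the ABC tolerance.
    x_prop = simulate(theta_prop)
    if dist(x_prop, x_obs) <= eps and np.log(rng.random()) < log_ratio:
        return theta_prop
    return theta
```

In this sketch the kernel choice is made independently of the current state, so each component kernel satisfies detailed balance with respect to the ABC posterior and the mixture remains a valid MCMC update.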