Sequential neural posterior estimation (SNPE) techniques have recently been proposed for dealing with simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequentially simulated data using neural network-based conditional density estimators. This paper revisits SNPE-B, proposed by Lueckmann et al. (2017), which suffers from inefficient utilization of simulated data and high variance of parameter updates, leading to slow inference. To address these issues, we first introduce a concentrated loss function based on an adaptive calibration kernel that appropriately reweights the simulated data to improve data efficiency. Moreover, we provide a theoretical analysis of the variance of the associated Monte Carlo estimators. Based on this analysis, we then propose several variance reduction techniques to further accelerate learning. Numerical experiments demonstrate that our method outperforms the original method as well as other existing competitors on certain tasks.
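As a rough illustration of the calibration-kernel idea (the notation below, including the kernel $K_\tau$, proposal $\tilde p$, and conditional density estimator $q_\phi$, is a hedged reconstruction from the SNPE-B literature, not notation confirmed by this abstract), the reweighted training objective can be sketched as

\[
\mathcal{L}(\phi) \;=\; -\frac{1}{N}\sum_{n=1}^{N} K_\tau(x_n, x_o)\,\frac{p(\theta_n)}{\tilde p(\theta_n)}\,\log q_\phi(\theta_n \mid x_n),
\]

where $\theta_n \sim \tilde p(\theta)$ are parameters drawn from the proposal, $x_n \sim p(x \mid \theta_n)$ are the corresponding simulator outputs, $x_o$ is the observation, and $p(\theta)/\tilde p(\theta)$ is the SNPE-B importance weight. Making the kernel bandwidth $\tau$ adaptive concentrates the loss on simulations whose outputs fall near $x_o$, which is one plausible reading of the data-efficiency gain claimed above.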