Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.
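To make the abstract's two ingredients concrete, the sketch below combines a contrastive (real-vs-fake) classifier, whose averaged log-odds over the real sample estimate the KL divergence, with the exponential weighting kernel, which reweights prior draws by exp(-n * KL_hat) instead of thresholding them. Everything specific here is an illustrative assumption, not the paper's setup: the Gaussian location simulator, the N(0, 9) prior, the logistic-regression classifier (the paper allows flexible machine learning classifiers), and all sample sizes.

```python
# Minimal sketch of classifier-based ABC with an exponential weighting kernel.
# All modeling choices below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(theta, n):
    # Toy simulator-based model (assumption): Gaussian with unknown mean.
    return rng.normal(theta, 1.0, size=(n, 1))

def kl_estimate(x_real, x_fake):
    """Contrastive KL estimate: train a classifier to distinguish real from
    simulated data, then average its log-odds over the real sample. This
    estimates KL(p_real || p_theta) via the density ratio implied by the
    classifier."""
    X = np.vstack([x_real, x_fake])
    y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_fake))])
    clf = LogisticRegression().fit(X, y)
    p = np.clip(clf.predict_proba(x_real)[:, 1], 1e-12, 1 - 1e-12)
    return np.mean(np.log(p / (1.0 - p)))

n = 200
x_obs = simulate(1.5, n)          # stand-in for the observed data

thetas, weights = [], []
for _ in range(500):
    theta = rng.normal(0.0, 3.0)  # prior draw (assumption: N(0, 9))
    x_sim = simulate(theta, n)
    kl = kl_estimate(x_obs, x_sim)
    thetas.append(theta)
    # Exponential kernel: weight the draw by exp(-n * KL_hat), so no
    # acceptance threshold epsilon is needed. The accept/reject kernel
    # would instead keep the draw only when KL_hat <= epsilon.
    # KL is nonnegative, so negative estimation noise is truncated at 0.
    weights.append(np.exp(-n * max(kl, 0.0)))

weights = np.asarray(weights)
posterior_mean = np.sum(weights * np.asarray(thetas)) / np.sum(weights)
print(f"ABC posterior mean estimate: {posterior_mean:.3f}")
```

The weighted draws (thetas, weights) form the ABC posterior approximation; any posterior functional, such as the mean above, is computed by self-normalized importance weighting.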