Approximate Bayesian Computation (ABC) enables statistical inference in complex models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism that compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions using a Kullback-Leibler (KL) divergence estimator obtained via classification. In particular, we embed flexible machine learning classifiers within ABC to automate the comparison of real and simulated data. We consider the traditional accept/reject kernel as well as an exponential weighting scheme that does not require an ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.
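The core idea above can be illustrated with a minimal sketch: train a classifier to discriminate real from simulated samples, use the fact that (with balanced classes) the classifier's logit estimates the log density ratio, average it over the real data to get a KL estimate, and use that estimate as the ABC discrepancy inside an exponential weighting kernel. Everything below is illustrative, not the paper's implementation: the toy Gaussian model, the hand-rolled logistic-regression classifier, and all settings (bandwidth, sample sizes) are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.1, iters=500):
    # Plain gradient descent on the logistic loss; X includes an intercept column.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def kl_hat(real, fake):
    # Classification-based KL(real || fake) estimate: with balanced classes,
    # the logit of P(class = real | x) estimates log p(x)/q(x), so averaging
    # it over the real sample estimates E_p[log p/q] = KL(p || q).
    X = np.column_stack([np.ones(len(real) + len(fake)),
                         np.concatenate([real, fake])])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w = fit_logistic(X, y)
    logits_real = np.column_stack([np.ones(len(real)), real]) @ w
    return max(0.0, float(np.mean(logits_real)))  # clip: KL is nonnegative

# Toy model (an assumption for illustration): data ~ N(theta, 1), true theta = 2.
theta_true = 2.0
real = rng.normal(theta_true, 1.0, 300)

draws, weights = [], []
for _ in range(400):
    theta = rng.uniform(-5.0, 5.0)          # draw from a uniform prior
    fake = rng.normal(theta, 1.0, 300)      # simulate data from the model
    d = kl_hat(real, fake)                  # classifier-based discrepancy
    draws.append(theta)
    weights.append(np.exp(-d / 0.1))        # exponential kernel, bandwidth 0.1

draws, weights = np.array(draws), np.array(weights)
post_mean = np.sum(weights * draws) / np.sum(weights)
print("posterior mean:", post_mean)
```

The exponential weights replace the hard accept/reject step: every prior draw is retained, but draws whose simulated data the classifier can easily separate from the real data receive exponentially small weight, so no acceptance threshold needs to be tuned.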