We propose a framework for Bayesian Likelihood-Free Inference (LFI) based on Generalized Bayesian Inference. To define the generalized posterior, we use Scoring Rules (SRs), which evaluate probabilistic models given an observation. As LFI allows sampling from the model (but not evaluating the likelihood), we employ SRs with easy empirical estimators. Our framework includes novel approaches as well as popular LFI techniques (such as Bayesian Synthetic Likelihood), which benefit from the generalized Bayesian interpretation. Our method enjoys posterior consistency in a well-specified setting when a strictly proper SR is used (i.e., one whose expectation is uniquely minimized when the model corresponds to the data-generating process). Further, we prove a finite-sample generalization bound and outlier robustness for the Kernel and Energy Score posteriors, and propose a strategy suitable for the LFI setup for tuning the learning rate in the generalized posterior. We run simulation studies with pseudo-marginal Markov Chain Monte Carlo (MCMC) and compare with related approaches, which we show do not enjoy robustness and consistency.
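To illustrate the kind of SR estimator the abstract refers to, below is a minimal sketch of the standard unbiased Monte Carlo estimator of the Energy Score, which only requires samples from the model (no likelihood evaluation). The function name `energy_score` and the NumPy implementation are ours, not the paper's code; the formula is the usual one, with exponent `beta` in (0, 2).

```python
import numpy as np

def energy_score(samples, y, beta=1.0):
    """Unbiased empirical estimator of the Energy Score (lower is better).

    samples : (m, d) array of i.i.d. draws from the model P_theta
    y       : (d,) observed data point
    beta    : exponent in (0, 2); beta < 2 gives a strictly proper SR

    Estimates  E||X - y||^beta - 0.5 * E||X - X'||^beta
    via sample averages over the m draws.
    """
    m = samples.shape[0]
    # First term: average distance from each model sample to the observation.
    term1 = (np.linalg.norm(samples - y, axis=1) ** beta).mean()
    # Second term: average pairwise distance between distinct model samples.
    pair = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2) ** beta
    term2 = pair.sum() / (2 * m * (m - 1))  # diagonal is zero, so no correction needed
    return term1 - term2
```

In a generalized-Bayes scheme of the kind described, such an estimate would replace the log-likelihood inside the (pseudo-marginal) MCMC acceptance step, weighted by the learning rate.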