Bayesian Likelihood-Free Inference methods yield posterior approximations for simulator models with intractable likelihood. Recently, many works have trained neural networks to approximate either the intractable likelihood or the posterior directly. Most proposals use normalizing flows, namely neural networks parametrizing invertible maps used to transform samples from an underlying base measure; the probability density of the transformed samples is then accessible, and the normalizing flow can be trained via maximum likelihood on simulated parameter-observation pairs. A recent work [Ramesh et al., 2022] instead approximated the posterior with generative networks, which drop the invertibility requirement and are thus a more flexible class of distributions scaling to high-dimensional and structured data. However, generative networks only allow sampling from the parametrized distribution; for this reason, Ramesh et al. [2022] follow the common solution of adversarial training, where the generative network plays a min-max game against a "critic" network. This procedure is unstable and can lead to a learned distribution underestimating the uncertainty, in extreme cases collapsing to a single point. Here, we propose to approximate the posterior with generative networks trained by Scoring Rule minimization, an overlooked adversarial-free method enabling smooth training and better uncertainty quantification. In simulation studies, the Scoring Rule approach yields better performance with shorter training time than the adversarial framework.
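To make the training objective concrete, the following is a minimal PyTorch-style sketch of training a conditional generative network by minimizing a strictly proper scoring rule on simulated parameter-observation pairs. It uses the energy score, which admits an unbiased estimator from generator draws; the choice of this particular rule, the network architecture, and all names (ConditionalGenerator, energy_score, train) are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    """Maps (observation, latent noise) to samples of the model parameters."""

    def __init__(self, obs_dim, noise_dim, param_dim, hidden=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, obs, n_samples):
        # obs: (batch, obs_dim); draw n_samples latent vectors per observation.
        batch = obs.shape[0]
        z = torch.randn(batch, n_samples, self.noise_dim, device=obs.device)
        obs_rep = obs.unsqueeze(1).expand(-1, n_samples, -1)
        return self.net(torch.cat([obs_rep, z], dim=-1))  # (batch, n_samples, param_dim)


def energy_score(samples, target, beta=1.0):
    """Unbiased estimate of the energy score, a strictly proper scoring rule.

    ES(P, y) = E||X - y||^beta - 0.5 * E||X - X'||^beta, with X, X' ~ P,
    estimated from m generator draws per observation.
    samples: (batch, m, d) generator draws; target: (batch, d) true parameters.
    """
    m = samples.shape[1]
    term1 = (samples - target.unsqueeze(1)).norm(dim=-1).pow(beta).mean(dim=1)
    # Pairwise distances between draws; the zero diagonal does not affect the sum.
    pdist = torch.cdist(samples, samples).pow(beta)
    term2 = pdist.sum(dim=(1, 2)) / (m * (m - 1))
    return (term1 - 0.5 * term2).mean()


def train(generator, theta, x, n_epochs=100, m=10, lr=1e-3):
    """Minimize the expected energy score over simulated (theta, x) pairs.

    No critic network or min-max game is involved: the loss is a plain
    minimization, which is what makes the training smooth.
    """
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(n_epochs):
        opt.zero_grad()
        samples = generator(x, n_samples=m)
        loss = energy_score(samples, theta)
        loss.backward()
        opt.step()
    return generator
```

After training, posterior samples for a new observation are obtained simply by calling the generator on it with fresh latent noise; strict propriety of the scoring rule is what encourages the learned distribution to match the full posterior rather than collapse to a point.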