We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA is a normalizing-flow-based algorithm for inference in implicit models, and is thus a simulation-based inference method that requires only simulations from a generative model. Compared to similar methods, the main advantage of SNPLA is that it jointly learns both the posterior and the likelihood. SNPLA completely avoids Markov chain Monte Carlo sampling and the correction steps of the parameter proposal function that are introduced in similar methods but can be numerically unstable or restrictive. Across four experiments, we show that SNPLA performs competitively when using the same number of model simulations as other methods, even though the inference problem for SNPLA is more complex due to the joint learning of the posterior and the likelihood function.