We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA is a normalizing-flows-based algorithm for inference in implicit models, and is therefore a simulation-based inference method that only requires simulations from a generative model. SNPLA avoids Markov chain Monte Carlo sampling and the correction steps for the parameter proposal distribution that similar methods introduce, steps which can be numerically unstable or restrictive. By utilizing the reverse KL divergence, SNPLA manages to learn both the likelihood and the posterior in a sequential manner. Across four experiments, we show that SNPLA performs competitively when given the same number of model simulations as other methods, even though the inference problem for SNPLA is more complex due to the joint learning of the posterior and the likelihood function. Because it relies on normalizing flows, SNPLA generates posterior draws roughly four orders of magnitude faster than MCMC-based methods.
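As a sketch, one plausible formalization of the two objectives described above, under assumed notation that does not appear in the abstract itself: $q_\psi(x \mid \theta)$ denotes the likelihood flow, $q_\phi(\theta \mid x)$ the posterior flow, $p(\theta)$ the prior, and $x_o$ the observed data. The likelihood flow is fit by maximum likelihood (forward KL) on simulated parameter-data pairs, while the posterior flow is fit by minimizing the reverse KL divergence against the approximate posterior induced by the learned likelihood and the prior:

$$\max_\psi \; \sum_{n=1}^{N} \log q_\psi(x_n \mid \theta_n), \qquad x_n \sim p(x \mid \theta_n),$$

$$\min_\phi \; \mathrm{KL}\!\left(q_\phi(\theta \mid x_o) \,\Big\|\, \hat{p}_\psi(\theta \mid x_o)\right) = \mathbb{E}_{\theta \sim q_\phi(\cdot \mid x_o)}\!\left[\log q_\phi(\theta \mid x_o) - \log q_\psi(x_o \mid \theta) - \log p(\theta)\right] + \text{const}.$$

Under this reading, the reverse KL is attractive because the expectation is taken over the posterior flow's own samples, so it can be estimated and differentiated through the reparameterization built into the normalizing flow, without requiring MCMC sampling or samples from the true posterior.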