Likelihood-free inference involves inferring parameter values given observed data and a simulator model. The simulator is computer code which takes parameters, performs stochastic calculations, and outputs simulated data. In this work, we view the simulator as a function whose inputs are (1) the parameters and (2) a vector of pseudo-random draws. We attempt to infer all these inputs conditional on the observations. This is challenging as the resulting posterior can be high-dimensional and involve strong dependence. We approximate the posterior using normalizing flows, a flexible parametric family of densities. Training data is generated by likelihood-free importance sampling with a large bandwidth value epsilon, which makes the target similar to the prior. The training data is "distilled" by using it to train an updated normalizing flow. The process is iterated, using the updated flow as the importance sampling proposal and slowly reducing epsilon so that the target becomes closer to the posterior. Unlike most other likelihood-free methods, we avoid the need to reduce the data to low-dimensional summary statistics, and hence can achieve more accurate results. We illustrate our method on two challenging examples, from queueing and epidemiology.
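As an illustration of the iterative scheme sketched above, the following minimal Python example runs the loop on a toy one-parameter simulator. A Gaussian proposal over (theta, u) stands in for the normalizing flow, and the observed value y_obs, the bandwidth schedule, and the simulator itself are hypothetical choices made for this sketch, not the implementation used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, u):
    # Toy simulator: the parameter plus one pseudo-random draw gives the "data".
    return theta + u

y_obs = 1.5        # observed data (a scalar, for illustration)
n = 2000           # importance samples per iteration
epsilon = 5.0      # initial bandwidth: large, so the target is close to the prior

# Proposal over (theta, u); a Gaussian family stands in for a normalizing flow here.
mean, cov = np.zeros(2), 4.0 * np.eye(2)

def log_prior(theta, u):
    # Standard normal prior on theta and standard normal pseudo-random draw u
    # (unnormalised; constants cancel in self-normalised importance weights).
    return -0.5 * (theta**2 + u**2)

for it in range(20):
    # 1. Draw (theta, u) jointly from the current proposal.
    samples = rng.multivariate_normal(mean, cov, size=n)
    theta, u = samples[:, 0], samples[:, 1]

    # 2. Importance weights against the epsilon-tempered target:
    #    prior times a Gaussian kernel comparing simulated and observed data.
    log_target = log_prior(theta, u) - 0.5 * ((simulator(theta, u) - y_obs) / epsilon) ** 2
    diff = samples - mean
    log_prop = -0.5 * np.sum(diff @ np.linalg.inv(cov) * diff, axis=1)
    logw = log_target - log_prop
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # 3. "Distil": refit the proposal to the weighted sample.  With a flow,
    #    this step would be weighted maximum-likelihood training instead.
    mean = w @ samples
    centred = samples - mean
    cov = centred.T @ (centred * w[:, None]) + 1e-6 * np.eye(2)

    # 4. Slowly reduce epsilon so the target approaches the true posterior.
    epsilon = max(0.8 * epsilon, 0.1)

print("approximate posterior mean of theta:", mean[0])

In the method described above, step 3 would train a normalizing flow on the weighted sample rather than refitting a Gaussian, which is what allows the proposal to capture the strong dependence between the parameters and the pseudo-random draws.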