Simulation-based inference (SBI) solves statistical inverse problems by repeatedly running a stochastic simulator and inferring posterior distributions from model simulations. To improve simulation efficiency, several inference methods take a sequential approach and iteratively adapt the proposal distribution from which model simulations are generated. However, many of these sequential methods are difficult to use in practice, both because the resulting optimisation problems can be challenging and because efficient diagnostic tools are lacking. To overcome these issues, we present Truncated Sequential Neural Posterior Estimation (TSNPE). TSNPE performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches. In addition, TSNPE enables efficient coverage tests that can scale to complex models with many parameters. We demonstrate that TSNPE performs on par with previous methods on established benchmark tasks. We then apply TSNPE to two challenging problems from neuroscience and show that TSNPE can successfully obtain the posterior distributions, whereas previous methods fail. Overall, our results demonstrate that TSNPE is an efficient, accurate, and robust inference method that can scale to challenging scientific models.
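To illustrate the truncated-proposal idea described above, the following is a minimal, self-contained sketch on a toy one-dimensional problem. It uses a weighted Gaussian fit as a stand-in for the neural posterior estimator, and helper names such as `simulate` and `fit_gaussian_estimator` are illustrative assumptions, not the paper's API: in each round, the proposal is the prior restricted to the region where the current posterior estimate places non-negligible mass.

```python
# Minimal sketch of sequential inference with truncated proposals (TSNPE-style),
# assuming a toy simulator and a Gaussian stand-in for the density estimator.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy stochastic simulator: x = theta + Gaussian noise."""
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

def fit_gaussian_estimator(theta, x, x_o):
    """Stand-in for training a conditional density estimator q(theta | x_o):
    fit a Gaussian to parameters whose simulations landed near x_o."""
    weights = np.exp(-0.5 * ((x - x_o) / 0.5) ** 2)
    weights /= weights.sum()
    mean = np.sum(weights * theta)
    std = np.sqrt(np.sum(weights * (theta - mean) ** 2)) + 1e-6
    return mean, std

def log_q(theta, mean, std):
    """Log-density of the current (Gaussian) posterior estimate."""
    return -0.5 * ((theta - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

x_o = 1.5                       # observed data
prior_lo, prior_hi = -5.0, 5.0  # uniform prior over theta
n_sims, eps = 1_000, 1e-4       # simulations per round, truncation mass

proposal_lo, proposal_hi = prior_lo, prior_hi
for round_ in range(3):
    # 1) Draw parameters from the (truncated) proposal and simulate.
    theta = rng.uniform(proposal_lo, proposal_hi, size=n_sims)
    x = simulate(theta)

    # 2) "Train" the posterior estimator on the simulated pairs.
    mean, std = fit_gaussian_estimator(theta, x, x_o)

    # 3) Truncate: restrict the prior to the highest-probability region that
    #    contains (1 - eps) of the current posterior estimate's mass.
    samples = rng.normal(mean, std, size=10_000)
    threshold = np.quantile(log_q(samples, mean, std), eps)
    grid = rng.uniform(prior_lo, prior_hi, size=100_000)
    inside = grid[log_q(grid, mean, std) >= threshold]
    proposal_lo, proposal_hi = inside.min(), inside.max()
    print(f"round {round_}: posterior ~ N({mean:.2f}, {std:.2f}), "
          f"proposal = [{proposal_lo:.2f}, {proposal_hi:.2f}]")
```

Because the truncated proposal is simply the prior restricted to a subset of its support, the estimator can be trained with the standard maximum-likelihood loss in every round, which is what sidesteps the optimisation issues of alternative sequential schemes.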