We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available. Both methods learn a conditional energy-based model (EBM) of the likelihood using synthetic data generated by the simulator, conditioned on parameters drawn from a proposal distribution. The learned likelihood can then be combined with any prior to obtain a posterior estimate, from which samples can be drawn using MCMC. Our methods uniquely combine a flexible EBM with the minimization of a KL loss; this contrasts with other synthetic likelihood methods, which either rely on normalizing flows or minimize score-based objectives, choices that come with known pitfalls. Our first method, Amortized Unnormalized Neural Likelihood Estimation (AUNLE), introduces a tilting trick during training that significantly lowers the computational cost of inference by enabling the use of efficient MCMC techniques. Our second method, Sequential UNLE (SUNLE), employs a robust doubly intractable approach in order to reuse simulation data and improve posterior accuracy on a specific dataset. We demonstrate the properties of both methods on a range of synthetic datasets, and apply them to a neuroscience model of the pyloric network in the crab Cancer borealis, matching the performance of other synthetic likelihood methods at a fraction of the simulation budget.
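The inference step described above (combining a learned unnormalized likelihood with a prior and sampling the resulting posterior with MCMC) can be illustrated with a minimal sketch. Here the energy function is a hand-written stand-in for the trained conditional EBM, the prior is a standard normal, and the sampler is plain random-walk Metropolis-Hastings; the function names and dimensions are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(theta, x):
    # Stand-in conditional energy E(x | theta); in AUNLE/SUNLE this would
    # be a trained neural network. Here: quadratic energy, i.e. a Gaussian
    # synthetic likelihood centered at theta.
    return 0.5 * np.sum((x - theta) ** 2)

def log_prior(theta):
    # Standard normal prior over a 2-D parameter.
    return -0.5 * np.sum(theta ** 2)

def log_posterior(theta, x_obs):
    # Unnormalized log posterior: log prior plus the (unnormalized)
    # synthetic log likelihood, -E(x_obs | theta). The unknown
    # normalizing constant cancels in the MH acceptance ratio.
    return log_prior(theta) - energy(theta, x_obs)

def metropolis_hastings(x_obs, n_samples=2000, step=0.5):
    theta = np.zeros(2)
    lp = log_posterior(theta, x_obs)
    samples, n_accept = [], 0
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(proposal, x_obs)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH accept/reject
            theta, lp = proposal, lp_prop
            n_accept += 1
        samples.append(theta.copy())
    return np.array(samples), n_accept / n_samples

x_obs = np.array([1.0, -1.0])
samples, acc_rate = metropolis_hastings(x_obs)
```

For this Gaussian toy model the posterior is available in closed form (mean x_obs / 2), so the chain's post-burn-in average can be checked against it; with a learned EBM only the MCMC estimate would be available.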