Modern approaches for simulation-based inference rely upon deep learning surrogates to enable approximate inference with computer simulators. In practice, the estimated posteriors' computational faithfulness is, however, rarely guaranteed. For example, Hermans et al. (2021) show that current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences. In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution. We achieve this by enforcing a balancing condition that increases the quantified uncertainty in small simulation budget regimes while still converging to the exact posterior as the budget increases. We provide theoretical arguments showing that BNRE tends to produce posterior surrogates that are more conservative than NRE's. We evaluate BNRE on a wide variety of tasks and show that it produces conservative posterior surrogates on all tested benchmarks and simulation budgets. Finally, we emphasize that BNRE is straightforward to implement over NRE and does not introduce any computational overhead.
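To make the balancing condition concrete, the sketch below shows one plausible form of the BNRE training objective: the standard NRE binary cross-entropy loss plus a penalty that pushes the classifier's expected output on joint samples and on marginal samples to sum to one. The function name, argument names, and the default regularization strength `lam` are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def bnre_loss(d_joint, d_marginal, lam=100.0):
    """Sketch of a BNRE-style loss (illustrative, not the reference code).

    d_joint:    classifier outputs d(theta, x) on pairs drawn from the
                joint p(theta, x) (label 1).
    d_marginal: classifier outputs on pairs drawn from the product of
                marginals p(theta) p(x) (label 0).
    lam:        weight of the balancing regularizer (assumed default).
    """
    eps = 1e-12  # numerical guard for log(0)
    # Standard NRE binary cross-entropy term.
    bce = -np.mean(np.log(d_joint + eps)) - np.mean(np.log(1.0 - d_marginal + eps))
    # Balancing condition: E_joint[d] + E_marginal[d] should equal 1.
    balance = (np.mean(d_joint) + np.mean(d_marginal) - 1.0) ** 2
    return bce + lam * balance
```

A classifier that already satisfies the balancing condition incurs no extra penalty, so the regularizer leaves the Bayes optimal solution unchanged while discouraging the overconfident classifiers that violate it.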