A simulation is useful when the phenomenon of interest is either expensive to regenerate or irreproducible under the same conditions. Recently, Bayesian inference on the distribution of the simulation input parameters has been applied sequentially to minimize the simulation budget required to validate a simulator against the real world. However, Bayesian inference remains challenging when the ground-truth posterior is multi-modal and the simulation output is high-dimensional. This paper introduces a regularization technique, namely Neural Posterior Regularization (NPR), which encourages the model to explore the input parameter space effectively. We then derive the closed-form solution of the regularized optimization, which enables analysis of the effect of the regularization. We empirically validate that NPR attains statistically significant gains in benchmark performance across diverse simulation tasks.
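To make the multi-modality challenge concrete, here is a minimal sketch (not the paper's method; all names, the toy simulator, and the grid-based inference are illustrative assumptions) of why calibrating a simulator to real-world data can yield a multi-modal posterior: a simulator whose output is insensitive to the sign of its input parameter admits two posterior modes, so an inference model that collapses onto one mode misses half the posterior mass.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, noise_std=0.1):
    # Toy simulator: the output depends on theta only through theta**2,
    # so theta and -theta are indistinguishable from the output alone.
    return theta**2 + rng.normal(0.0, noise_std)

# "Real-world" observation generated at a ground-truth parameter of 1.5.
y_obs = simulator(1.5)

# Grid-based Bayesian inference: uniform prior on [-3, 3] and a
# Gaussian likelihood around the noiseless simulator output.
grid = np.linspace(-3.0, 3.0, 1201)
log_lik = -0.5 * ((y_obs - grid**2) / 0.1) ** 2
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# By symmetry the posterior splits its mass between two modes,
# one near +1.5 and one near -1.5.
mass_pos = post[grid > 0].sum()
mass_neg = post[grid < 0].sum()
print(mass_pos, mass_neg)  # roughly equal mass in each mode
```

A mode-seeking variational or neural posterior fit to this target would typically lock onto one of the two modes; a regularizer that rewards exploration of the input parameter space, as the abstract describes, is aimed at exactly this failure mode.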