This note revisits the classical results on Laplace approximation in a modern, non-asymptotic and dimension-free form. Such an extension is motivated by applications to high-dimensional statistical and optimization problems. The established results provide explicit non-asymptotic bounds on the quality of a Gaussian approximation of the posterior distribution in total variation distance in terms of the so-called \emph{effective dimension} \( p_G \). This quantity is defined via the interplay between the information contained in the data and in the prior distribution. In contrast to the prominent Bernstein-von Mises results, the impact of the prior is not negligible, and it allows the effective dimension to remain small or moderate even if the true parameter dimension is huge or infinite. We also address the use of a Gaussian approximation with inexact parameters, with a focus on replacing the Maximum a Posteriori (MAP) value by the posterior mean, and design an algorithm for Bayesian optimization based on Laplace iterations. The results are specialized to the case of a nonlinear inverse problem.
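The Laplace approximation and the effective dimension mentioned above can be sketched in formulas as follows. This is a hedged illustration under a standard setup with log-likelihood \(L(\theta)\) and a Gaussian prior with precision operator \(G^2\); the notation \(\widehat{D}^2\), \(\widehat{D}_G^2\) and the exact form of \(p_G\) are assumptions for illustration and may differ from the definitions in the body of the note.

```latex
% Posterior under a Gaussian prior N(0, G^{-2}) and its MAP (assumed setup):
\[
  \pi(\theta \mid Y) \propto \exp\bigl\{ L(\theta) - \tfrac{1}{2}\|G\theta\|^2 \bigr\},
  \qquad
  \widehat{\theta} = \operatorname*{argmax}_{\theta}
    \bigl\{ L(\theta) - \tfrac{1}{2}\|G\theta\|^2 \bigr\} .
\]
% The Laplace (Gaussian) approximation of the posterior:
\[
  \pi(\cdot \mid Y) \approx \mathcal{N}\bigl(\widehat{\theta},\, \widehat{D}_G^{-2}\bigr),
  \qquad
  \widehat{D}_G^{2} = \widehat{D}^{2} + G^2,
  \qquad
  \widehat{D}^{2} = -\nabla^2 L(\widehat{\theta}) .
\]
% One common form of the effective dimension (an assumption here):
\[
  p_G = \operatorname{tr}\bigl( \widehat{D}_G^{-2}\, \widehat{D}^{2} \bigr) .
\]
```

In this form a strong prior (large \(G^2\)) damps the trace and keeps \(p_G\) small or moderate even when \(\dim\theta\) is huge or infinite, which is the mechanism behind the dimension-free bounds described above.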