In this work, we propose a Variational Inference algorithm to approximate posterior distributions. Building on prior methods, we develop the Gradient-Steered Stein Variational Gradient Descent (G-SVGD) approach, which introduces a novel loss function combining a weighted gradient and the Evidence Lower Bound (ELBO) to improve convergence speed and accuracy. The learning rate is determined through a suboptimal minimization of this loss function within a gradient descent framework. We compare G-SVGD against the standard Stein Variational Gradient Descent (SVGD) approach with Adam-based learning rate adaptation, as well as against the Markov Chain Monte Carlo (MCMC) method, assessing performance on two wave prospection models representing low-contrast and high-contrast subsurface scenarios. To obtain robust numerical approximations in the forward-model solver, we employ a five-point operator, while the adjoint method improves the accuracy of the gradients of the log posterior. Our findings demonstrate that G-SVGD accelerates convergence and offers improved performance in scenarios where gradient evaluation is computationally expensive. We highlight the algorithm's applicability to wave prospection models and its potential for broader applications in Bayesian inference. Finally, we discuss the benefits and limitations of G-SVGD, emphasizing its contribution to advancing computational efficiency in uncertainty quantification.
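For reference, the particle update that G-SVGD builds on is the standard SVGD transport step of Liu and Wang (2016); the specific weighted-gradient/ELBO loss of G-SVGD is not reproduced here. With particles $\{x_i\}_{i=1}^{n}$, a positive-definite kernel $k$ (commonly an RBF kernel), and posterior $p(x \mid d)$, the update reads
\[
x_i \leftarrow x_i + \epsilon\,\hat{\phi}(x_i),
\qquad
\hat{\phi}(x) = \frac{1}{n}\sum_{j=1}^{n}\Big[\, k(x_j, x)\,\nabla_{x_j}\log p(x_j \mid d) + \nabla_{x_j} k(x_j, x) \Big],
\]
where the step size $\epsilon$ is adapted by Adam in standard SVGD, and is instead selected in G-SVGD by the suboptimal minimization of the proposed loss described above.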