Sampling a probability distribution with an unknown normalization constant is a fundamental problem in computational science and engineering. This task may be cast as an optimization problem over all probability measures, and an initial distribution can be evolved to the desired minimizer dynamically via gradient flows. Mean-field models, whose law is governed by the gradient flow in the space of probability measures, may also be identified; particle approximations of these mean-field models form the basis of algorithms. The gradient flow approach is also the basis of algorithms for variational inference, in which the optimization is performed over a parameterized family of probability distributions such as Gaussians, and the underlying gradient flow is restricted to the parameterized family. By choosing different energy functionals and metrics for the gradient flow, different algorithms with different convergence properties arise. In this paper, we concentrate on the Kullback-Leibler divergence after showing that, up to scaling, it has the unique property that the gradient flows resulting from this choice of energy do not depend on the normalization constant. For the metrics, we focus on variants of the Fisher-Rao, Wasserstein, and Stein metrics; we introduce the affine invariance property for gradient flows and their corresponding mean-field models, determine whether a given metric leads to affine invariance, and modify it to make it affine invariant if it does not. We study the resulting gradient flows in both probability density space and Gaussian space. The flow in the Gaussian space may be understood as a Gaussian approximation of the flow in density space. We demonstrate that the Gaussian approximations obtained through the metric and through moment closure coincide, establish connections between them, and study their long-time convergence properties, showing the advantages of affine invariance.
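As a brief illustration of why the Kullback-Leibler energy removes dependence on the normalization constant, consider a target $\pi \propto \widetilde{\pi}$ with unknown constant $Z$; the notation below is illustrative and not taken from the paper itself:
\[
\mathcal{E}(\rho) \;=\; \mathrm{KL}(\rho \,\|\, \pi) \;=\; \int \rho \log \frac{\rho}{\pi}\, dx,
\qquad \pi = \frac{\widetilde{\pi}}{Z}, \quad Z = \int \widetilde{\pi}\, dx,
\]
\[
\mathcal{E}(\rho) \;=\; \int \rho \log \frac{\rho}{\widetilde{\pi}}\, dx \;+\; \log Z.
\]
Since $\log Z$ is an additive constant independent of $\rho$, it drops out of the first variation of $\mathcal{E}$, so any gradient flow of this energy, whether in the Fisher-Rao, Wasserstein, or Stein geometry, can be computed from the unnormalized density $\widetilde{\pi}$ alone.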