Sampling a probability distribution with an unknown normalization constant is a fundamental problem in computational science and engineering. This task may be cast as an optimization problem over the space of all probability measures, and an initial distribution can be evolved to the desired minimizer dynamically via gradient flows. Mean-field models, whose law is governed by the gradient flow in the space of probability measures, may also be identified; particle approximations of these mean-field models form the basis of algorithms. The gradient flow approach is also the basis of algorithms for variational inference, in which the optimization is performed over a parameterized family of probability distributions such as Gaussians, and the underlying gradient flow is restricted to the parameterized family. By choosing different energy functionals and metrics for the gradient flow, different algorithms with different convergence properties arise. In this paper, we concentrate on the Kullback-Leibler divergence after showing that, up to scaling, it has the unique property that the gradient flows resulting from this choice of energy do not depend on the normalization constant. For the metrics, we focus on variants of the Fisher-Rao, Wasserstein, and Stein metrics; we introduce the affine invariance property for gradient flows and their corresponding mean-field models, determine whether a given metric leads to affine invariance, and modify it to make it affine invariant if it does not. We study the resulting gradient flows in both probability density space and Gaussian space. The flow in the Gaussian space may be understood as a Gaussian approximation of the flow. We demonstrate that the Gaussian approximations based on the metric and through moment closure coincide, establish connections between them, and study their long-time convergence properties, showing the advantages of affine invariance.
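As a concrete illustration of the particle approximations mentioned above, the following minimal sketch simulates the Wasserstein gradient flow of the Kullback-Leibler divergence, whose mean-field limit yields overdamped Langevin dynamics. The target density, step size, particle count, and the quadratic potential `V` are all assumptions chosen for illustration, not from the paper; note that only the gradient of the potential enters, so the normalization constant plays no role, as the abstract emphasizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target pi(x) ∝ exp(-V(x)) with a quadratic potential
# V(x) = 0.5 x^T A x (an assumed Gaussian example); the dynamics use
# only grad V, so the normalization constant of pi never appears.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # precision matrix of the target (assumption)

def grad_V(X):
    """Gradient of V for a batch of particles, shape (n, d)."""
    return X @ A.T

def langevin_particles(n=4000, steps=2000, dt=1e-2):
    """Euler-Maruyama particle approximation of the Wasserstein
    gradient flow of KL(rho || pi):  dX = -grad V(X) dt + sqrt(2) dW.
    Each particle evolves independently; their empirical law
    approximates the evolving density rho_t."""
    X = 3.0 * rng.standard_normal((n, 2))        # overdispersed start
    for _ in range(steps):
        noise = rng.standard_normal(X.shape)
        X = X - dt * grad_V(X) + np.sqrt(2.0 * dt) * noise
    return X

X = langevin_particles()
emp_cov = np.cov(X.T)
# For this Gaussian target the stationary covariance is A^{-1},
# so the empirical covariance should approach it at long times.
print(np.round(emp_cov, 2))
print(np.round(np.linalg.inv(A), 2))
```

An affine-invariant variant in the spirit of the paper would precondition the drift and noise by the ensemble's empirical covariance, so that the dynamics transform consistently under affine changes of coordinates; the unpreconditioned scheme above lacks that property and slows down on badly scaled targets.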