Particle gradient descent, which represents a probability measure by a collection of particles and performs gradient descent on all particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes theoretical guarantees for optimizing functions that are \emph{displacement convex} in the measure. Concretely, for Lipschitz displacement convex functions defined on probability measures over $\mathbb{R}^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ computations suffice to find an $\epsilon$-optimal solution. We further provide improved complexity bounds for optimizing smooth displacement convex functions. Finally, we demonstrate an application of our results to function approximation with specific neural architectures on two-dimensional inputs.
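To make the setting concrete, below is a minimal sketch of particle gradient descent on a simple displacement convex functional. The quadratic confinement potential $V(x) = \|x\|^2/2$ and interaction potential $W(z) = \|z\|^2/2$ are illustrative choices of ours, not the paper's experimental setup; both yield a displacement convex energy $F(\mu) = \int V \, d\mu + \tfrac{1}{2} \iint W(x - y) \, d\mu \, d\mu$.

```python
import numpy as np

# Minimal sketch (hypothetical example, not the paper's experiments).
# A probability measure is represented by n particles x_1, ..., x_n in R^d,
# and all particles are updated by gradient descent in parallel.

def grad_V(x):   # gradient of the confinement potential V(x) = |x|^2 / 2
    return x

def grad_W(z):   # gradient of the interaction potential W(z) = |z|^2 / 2
    return z

def particle_gradient_descent(x, step=0.1, iters=200):
    for _ in range(iters):
        # pairwise differences x_i - x_j, shape (n, n, d)
        diffs = x[:, None, :] - x[None, :, :]
        # (1/n) * sum_j grad W(x_i - x_j), the interaction force on particle i
        interaction = grad_W(diffs).mean(axis=1)
        # parallel gradient step on every particle
        x = x - step * (grad_V(x) + interaction)
    return x

rng = np.random.default_rng(0)
particles = rng.normal(size=(64, 2))            # n = 64 particles in d = 2
particles = particle_gradient_descent(particles)
print(np.linalg.norm(particles, axis=1).mean()) # particles concentrate near 0
```

For this convex-quadratic choice of $V$ and $W$ the minimizer is a point mass at the origin, so the mean particle norm printed at the end shrinks toward zero as the iterations proceed.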