Stein Variational Gradient Descent (SVGD) is a popular sampling algorithm used in various machine learning tasks. It is well known that SVGD arises from a discretization of the kernelized gradient flow of the Kullback-Leibler divergence $D_{KL}\left(\cdot\mid\pi\right)$, where $\pi$ is the target distribution. In this work, we propose to enhance SVGD via the introduction of importance weights, which leads to a new method that we call $\beta$-SVGD. In the continuous-time and infinite-particle regime, the time for this flow to converge to the equilibrium distribution $\pi$, quantified by the Stein Fisher information, depends only very weakly on the initial distribution $\rho_0$ and the target $\pi$. This is in sharp contrast to the kernelized gradient flow of the Kullback-Leibler divergence, whose convergence time depends on $D_{KL}\left(\rho_0\mid\pi\right)$. Under certain assumptions, we provide a descent lemma for the population-limit $\beta$-SVGD, which recovers the descent lemma for population-limit SVGD as $\beta\to 0$. We also illustrate the advantages of $\beta$-SVGD over SVGD through simple experiments.
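For reference, the following sketch recalls the standard quantities alluded to above, following the usual conventions of the SVGD literature (they are not spelled out in the abstract itself). The SVGD particle update with kernel $k$ reads
\[
x_i \;\leftarrow\; x_i + \epsilon\,\hat\phi(x_i),
\qquad
\hat\phi(x) \;=\; \frac{1}{n}\sum_{j=1}^{n}\Big[ k(x_j, x)\,\nabla_{x_j}\log\pi(x_j) \;+\; \nabla_{x_j} k(x_j, x) \Big],
\]
and the Stein Fisher information of $\rho$ relative to $\pi$ used to quantify convergence is
\[
I_{\mathrm{Stein}}\left(\rho\mid\pi\right)
\;=\;
\mathbb{E}_{x,x'\sim\rho}\!\left[ \nabla\log\frac{\rho(x)}{\pi(x)}^{\!\top} k(x,x')\, \nabla\log\frac{\rho(x')}{\pi(x')} \right].
\]
The importance weights that define $\beta$-SVGD modify the update direction $\hat\phi$; their precise form is specified in the body of the paper and not reproduced here.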