Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior. Recently, multiple methods have been proposed to run SGD with biased gradient estimates obtained from a Markov chain. This paper provides the first non-asymptotic convergence analysis of these methods by establishing their mixing rate and gradient variance. To do this, we demonstrate that these methods, which we collectively refer to as Markov chain score ascent (MCSA) methods, can be cast as special cases of the Markov chain gradient descent framework. Furthermore, by leveraging this new understanding, we develop a novel MCSA scheme, parallel MCSA (pMCSA), that achieves a tighter bound on the gradient variance. We demonstrate that this improved theoretical result translates to superior empirical performance.
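To make the scheme concrete, here is a minimal sketch of a pMCSA-style estimator for a 1-D Gaussian variational family. This is an illustrative assumption, not the paper's implementation: the target posterior, the random-walk Metropolis kernel (a stand-in for the kernels analyzed in the paper), and the names `log_post`, `rwm_step`, `score`, and `pmcsa` are all hypothetical. The idea it depicts is that each SGD step advances several parallel Markov chains targeting the posterior and uses the negative average score function over the chain states as a biased estimate of the inclusive KL gradient.

```python
import numpy as np

# Hypothetical sketch of parallel Markov chain score ascent (pMCSA)
# for q_lambda = N(mu, exp(log_sigma)^2). The kernel and posterior
# below are illustrative stand-ins, not the paper's choices.

def log_post(z):
    # Unnormalized log posterior (assumed target); here a standard normal.
    return -0.5 * z**2

def rwm_step(z, rng, step=0.5):
    # One random-walk Metropolis move leaving the posterior invariant.
    prop = z + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(z):
        return prop
    return z

def score(lmbda, z):
    # Gradient of log q_lambda(z) w.r.t. lambda = (mu, log_sigma).
    mu, log_sigma = lmbda
    sigma = np.exp(log_sigma)
    dmu = (z - mu) / sigma**2
    dlog_sigma = ((z - mu) / sigma) ** 2 - 1.0
    return np.array([dmu, dlog_sigma])

def pmcsa(n_chains=8, n_iters=2000, lr=1e-2, seed=0):
    # Each SGD iteration advances n_chains parallel chains by one MCMC
    # step; since grad_lambda KL(p || q_lambda) = -E_p[score], the
    # negative average score is a biased estimate of that gradient.
    rng = np.random.default_rng(seed)
    lmbda = np.array([1.0, 0.0])            # init (mu, log_sigma)
    states = rng.standard_normal(n_chains)  # one state per chain
    for _ in range(n_iters):
        states = np.array([rwm_step(z, rng) for z in states])
        grad = -np.mean([score(lmbda, z) for z in states], axis=0)
        lmbda -= lr * grad                  # SGD on the inclusive KL
    return lmbda
```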