We consider stochastic approximations of sampling algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) and the Random Batch Method (RBM) for Interacting Particle Dynamics (IPD). We observe that the noise introduced by the stochastic approximation is nearly Gaussian due to the Central Limit Theorem (CLT), while the driving Brownian motion is exactly Gaussian. We harness this structure to absorb the stochastic approximation error inside the diffusion process and obtain improved convergence guarantees for these algorithms. For SGLD, we prove the first stable convergence rate in KL divergence without requiring a uniform warm start, assuming the target density satisfies a Log-Sobolev Inequality. Our result implies superior first-order oracle complexity compared to prior works, under significantly milder assumptions. We also prove the first guarantees for SGLD under even weaker conditions such as H\"{o}lder smoothness and the Poincar\'{e} Inequality, thus bridging the gap between the state-of-the-art guarantees for Langevin Monte Carlo (LMC) and SGLD. Our analysis motivates a new algorithm, covariance correction, which corrects for the additional noise introduced by the stochastic approximation by rescaling the strength of the diffusion. Finally, we apply our techniques to analyze RBM and significantly improve upon the guarantees in prior works (such as removing the exponential dependence on the time horizon), under minimal assumptions.
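To make the covariance-correction idea concrete, the following is a minimal sketch; the symbols $V$ (potential of the target density), $\hat g$ (stochastic gradient estimate), $\hat\Sigma$ (estimated covariance of the gradient noise), and $\eta$ (step size), as well as the exact form of the rescaling, are illustrative assumptions rather than the paper's stated algorithm. SGLD replaces the exact gradient $\nabla V$ in the LMC update with $\hat g$; covariance correction shrinks the injected Gaussian noise so that the total one-step noise covariance still matches the $2\eta I_d$ of the driving Brownian motion:
\begin{align*}
\text{SGLD:} \quad & x_{k+1} = x_k - \eta\, \hat g(x_k) + \sqrt{2\eta}\, \xi_k, && \xi_k \sim \mathcal{N}(0, I_d),\\
\text{Corrected:} \quad & x_{k+1} = x_k - \eta\, \hat g(x_k) + \left(2\eta I_d - \eta^2 \hat\Sigma(x_k)\right)^{1/2} \xi_k,
\end{align*}
assuming $2\eta I_d - \eta^2 \hat\Sigma(x_k) \succeq 0$. Writing $\hat g(x_k) = \nabla V(x_k) + e_k$ with $\mathrm{Cov}(e_k) \approx \hat\Sigma(x_k)$ (nearly Gaussian by the CLT), the gradient noise contributes covariance $\eta^2 \hat\Sigma(x_k)$ per step, so the corrected update has total noise covariance $\approx 2\eta I_d$, matching LMC.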