We study the Inexact Langevin Algorithm (ILA) for sampling from a target distribution using an estimated score function, when the target distribution satisfies a log-Sobolev inequality (LSI), motivated by Score-based Generative Modeling (SGM). We prove long-term convergence in Kullback-Leibler (KL) divergence under the sufficient assumption that the error of the score estimator has a bounded Moment Generating Function (MGF). Our assumption is weaker than the $L^\infty$ error assumption (which is too strong to hold in practice) and stronger than the $L^2$ error assumption, which we show is not sufficient to guarantee convergence in general. Under the $L^\infty$ error assumption, we additionally prove convergence in R\'enyi divergence, which is stronger than KL divergence. We then study how to obtain a provably accurate score estimator satisfying the bounded MGF assumption for LSI target distributions, using an estimator based on kernel density estimation. Combined with our convergence results, this yields the first end-to-end convergence guarantee for ILA at the population level. Finally, we generalize our convergence analysis to SGM and derive a complexity guarantee in KL divergence for data satisfying LSI under an MGF-accurate score estimator.
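For concreteness, a minimal sketch of the ILA iteration (the notation $\epsilon$, $s$, $z_k$ below is ours, not taken from the abstract): with step size $\epsilon > 0$ and a score estimate $s \approx \nabla \log \pi$ for the target distribution $\pi$,
\[
x_{k+1} = x_k + \epsilon\, s(x_k) + \sqrt{2\epsilon}\, z_k, \qquad z_k \sim \mathcal{N}(0, I_d),
\]
which recovers the standard unadjusted Langevin algorithm when $s = \nabla \log \pi$. The MGF assumption controls exponential moments of the score error $\|s - \nabla \log \pi\|$, rather than only its second moment as in the $L^2$ assumption.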