In this paper, we provide non-asymptotic upper bounds on the error of sampling from a target density using three schemes of discretized Langevin diffusions. The first scheme is the Langevin Monte Carlo (LMC) algorithm, the Euler discretization of the Langevin diffusion. The second and third schemes are, respectively, the kinetic Langevin Monte Carlo (KLMC) for differentiable potentials and the kinetic Langevin Monte Carlo for twice-differentiable potentials (KLMC2). The main focus is on target densities that are smooth and log-concave on $\mathbb R^p$, but not necessarily strongly log-concave. Bounds on the computational complexity are obtained under two types of smoothness assumptions: the potential has a Lipschitz-continuous gradient, and the potential has a Lipschitz-continuous Hessian matrix. The error of sampling is measured by Wasserstein-$q$ distances. We advocate for the use of a new dimension-adapted scaling in the definition of the computational complexity when Wasserstein-$q$ distances are considered. The obtained results show that the number of iterations needed to achieve a scaled error smaller than a prescribed value depends only polynomially on the dimension.
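The LMC scheme mentioned above, i.e. the Euler discretization of the Langevin diffusion $\mathrm d\theta_t = -\nabla f(\theta_t)\,\mathrm dt + \sqrt 2\,\mathrm dW_t$, can be sketched as follows. This is a minimal illustrative implementation, not code from the paper; the function names, step size, and iteration count are assumptions chosen for the example.

```python
import numpy as np

def lmc(grad_f, theta0, step, n_iters, rng):
    """Langevin Monte Carlo: Euler discretization of the Langevin diffusion.

    One iteration applies the update
        theta_{k+1} = theta_k - step * grad_f(theta_k) + sqrt(2*step) * xi_k,
    where xi_k ~ N(0, I_p) is an independent standard Gaussian vector.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        xi = rng.standard_normal(theta.shape)
        theta = theta - step * grad_f(theta) + np.sqrt(2.0 * step) * xi
    return theta

# Toy usage: sample from the standard Gaussian target, whose potential is
# f(x) = ||x||^2 / 2, so grad_f(x) = x.
rng = np.random.default_rng(0)
samples = np.array([lmc(lambda x: x, np.zeros(2), 0.1, 200, rng)
                    for _ in range(500)])
print(samples.shape)
```

For a small step size, the empirical mean and covariance of the returned samples approach those of the target, up to the discretization bias that the non-asymptotic bounds in the paper quantify.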