Discretization of continuous-time diffusion processes is a widely recognized method for sampling. However, the canonical Euler-Maruyama discretization of the Langevin diffusion process, referred to as the Unadjusted Langevin Algorithm (ULA), has been studied mostly in the context of smooth (gradient-Lipschitz) and strongly log-concave densities, which is a considerable hindrance to its deployment in many fields, including statistics and machine learning. In this paper, we establish several theoretical contributions to the literature on such sampling methods for non-convex distributions. In particular, we introduce a new mixture weakly smooth condition, under which we prove that ULA converges provided an additional log-Sobolev inequality holds. We also show that ULA applied to a smoothed potential converges in the $L_{2}$-Wasserstein distance. Moreover, using convexification of the non-convex domain \citep{ma2019sampling} in combination with regularization, we establish convergence in Kullback-Leibler (KL) divergence, with the number of iterations required to reach an $\epsilon$-neighborhood of the target distribution depending only polynomially on the dimension. Finally, we relax the conditions of \citep{vempala2019rapid} and prove convergence guarantees under isoperimetry and non-strong convexity at infinity.
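For concreteness, the ULA update discussed above is the Euler-Maruyama discretization $x_{k+1} = x_k - \eta \nabla U(x_k) + \sqrt{2\eta}\,\xi_k$ of the Langevin diffusion, where $U$ is the potential of the target density $\propto e^{-U}$. A minimal sketch in Python (the function name `ula`, the step size, and the standard-Gaussian target are illustrative choices, not from the paper):

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin Algorithm: Euler-Maruyama discretization of
    the Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        # One Euler-Maruyama step: gradient drift plus Gaussian noise.
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian, U(x) = x^2 / 2, so grad U(x) = x
# (smooth and strongly log-concave, the classical setting for ULA).
rng = np.random.default_rng(0)
samples = ula(lambda x: x, x0=np.zeros(1), step=0.01, n_iters=50_000, rng=rng)
burn = samples[10_000:]  # discard burn-in; remaining draws approximate N(0, 1)
```

Note that ULA is "unadjusted": it omits a Metropolis accept/reject step, so its stationary distribution carries an $O(\eta)$ bias relative to the target, which is exactly why non-asymptotic convergence bounds of the kind established in the paper are needed.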