It is of significant interest in many applications to sample from a high-dimensional target distribution $\pi$ with density $\pi(\mathrm{d} x) \propto e^{-U(x)} \,\mathrm{d} x$, based on the temporal discretization of the Langevin stochastic differential equations (SDEs). In this paper, we propose an explicit projected Langevin Monte Carlo (PLMC) algorithm for a non-convex potential $U$ whose gradient grows super-linearly, and we carry out a non-asymptotic analysis of its sampling error in total variation distance. Equipped with time-independent regularity estimates for the corresponding Kolmogorov equation, we derive non-asymptotic bounds of order $\mathcal{O}(h |\ln h|)$ on the total variation distance between the target distribution of the Langevin SDEs and the law induced by the PLMC scheme. Moreover, for a given precision $\epsilon$, the smallest number of iterations of the classical Langevin Monte Carlo (LMC) scheme with a non-convex potential $U$ and a globally Lipschitz gradient of $U$ can be guaranteed to be of order $\mathcal{O}\big(\tfrac{d^{3/2}}{\epsilon} \cdot \ln (\tfrac{d}{\epsilon}) \cdot \ln (\tfrac{1}{\epsilon}) \big)$. Numerical experiments are provided to confirm the theoretical findings.
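For concreteness, the following is a minimal sketch of one possible projected Langevin scheme in the spirit described above. The specific projection (onto a Euclidean ball of fixed radius), its radius, and the double-well example potential are illustrative assumptions, not the paper's exact construction; they show how a projection can tame a super-linearly growing gradient within an otherwise standard Euler–Maruyama discretization of the Langevin SDE.

```python
import numpy as np

def plmc_sample(grad_U, x0, h, n_steps, radius, seed=0):
    """Illustrative projected Langevin Monte Carlo iteration.

    Assumption: each iterate is projected onto the Euclidean ball
    {|x| <= radius} before the Langevin update, so the drift term
    stays bounded even when grad_U grows super-linearly.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(n_steps):
        # Projection step: clip the iterate back into the ball.
        norm = np.linalg.norm(x)
        x_proj = x if norm <= radius else x * (radius / norm)
        # Euler-Maruyama step for dX_t = -grad U(X_t) dt + sqrt(2) dW_t.
        x = x_proj - h * grad_U(x_proj) + np.sqrt(2.0 * h) * rng.standard_normal(d)
    return x

# Hypothetical usage: double-well potential U(x) = (|x|^2 - 1)^2 / 4,
# which is non-convex with cubic (super-linear) gradient growth.
grad_U = lambda x: (np.dot(x, x) - 1.0) * x
sample = plmc_sample(grad_U, x0=np.zeros(2), h=1e-3, n_steps=10_000, radius=10.0)
```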