We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and we bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
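For background on the leapfrog integrator mentioned above, a single leapfrog step with step size $h$ for a potential $f$ (target density proportional to $e^{-f}$) updates the position $x$ and momentum $p$ as follows; this is standard background on HMC, not a statement of the paper's results:
\begin{align*}
p_{1/2} &= p_0 - \tfrac{h}{2}\,\nabla f(x_0),\\
x_1     &= x_0 + h\,p_{1/2},\\
p_1     &= p_{1/2} - \tfrac{h}{2}\,\nabla f(x_1).
\end{align*}
On a quadratic potential these updates are linear, so $n$ leapfrog steps yield iterates that are polynomials of degree at most $n$ in the (scaled) Hessian; the Chebyshev-polynomial connection referenced in the abstract concerns the structure of such polynomials.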