Practitioners are often left tuning Metropolis-Hastings algorithms by trial and error, or using optimal scaling guidelines, to avoid poor empirical performance. We develop lower bounds on the convergence rates of geometrically ergodic accept-reject-based Markov chains (e.g. Metropolis-Hastings, non-reversible Metropolis-Hastings) to study their computational complexity. If the target density concentrates with a parameter $n$ (e.g. Bayesian posterior concentration, Laplace approximations), we show the convergence rate can tend to $1$ exponentially fast if the tuning parameters do not depend carefully on $n$. We show this is the case for random-walk Metropolis in Bayesian logistic regression with Zellner's g-prior when the ratio of dimension to sample size satisfies $d/n \to \gamma \in (0, 1)$. We then treat more general target densities using a special class of Metropolis-Hastings algorithms with a Gaussian proposal (e.g. random-walk and Metropolis-adjusted Langevin algorithms), for which we give more general conditions. An application to flat-prior Bayesian logistic regression as $n \to \infty$ is studied. We also develop lower bounds in Wasserstein distances, which have become popular in the convergence analysis of high-dimensional MCMC algorithms, with similar conclusions.
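To make the object of study concrete, the accept-reject mechanism with a Gaussian proposal can be sketched as follows. This is a minimal, hypothetical illustration of random-walk Metropolis (one member of the class analyzed above), not the paper's method; the target, step size, and iteration count are arbitrary choices for the example.

```python
import math
import random

def rwm_sample(log_target, x0, step, n_iters, rng):
    """Random-walk Metropolis with a Gaussian proposal.

    Proposes x' = x + step * N(0, 1) and accepts with probability
    min(1, pi(x') / pi(x)); the proposal is symmetric, so the
    Hastings ratio reduces to a ratio of target densities.
    """
    x = x0
    samples = []
    accepts = 0
    for _ in range(n_iters):
        prop = x + step * rng.gauss(0.0, 1.0)
        # Accept-reject step; comparing on the log scale avoids overflow.
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
            accepts += 1
        samples.append(x)
    return samples, accepts / n_iters

# Illustrative target: standard normal, log density up to a constant.
log_pi = lambda x: -0.5 * x * x

rng = random.Random(0)
samples, acc_rate = rwm_sample(log_pi, x0=0.0, step=2.4,
                               n_iters=20000, rng=rng)
```

The step size `step` plays the role of the tuning parameter discussed above: if the target concentrates with $n$ but `step` is held fixed, nearly all proposals are rejected and mixing degrades, which is the regime the lower bounds quantify.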