Among random sampling methods, Markov Chain Monte Carlo algorithms are foremost. Using a combination of analytical and numerical approaches, we study their convergence properties towards the steady state, within a random walk Metropolis scheme. Analysing the relaxation properties of model algorithms simple enough to allow analytical progress, we show that the deviations from the target steady-state distribution can exhibit a localization transition as a function of the characteristic length of the attempted jumps defining the random walk. While the iteration of the Monte Carlo algorithm converges to equilibrium for all choices of the jump parameter, the localization transition drastically changes the asymptotic shape of the difference between the probability distribution reached after a finite number of steps of the algorithm and the target equilibrium distribution. We argue that the relaxation before and after the localization transition is limited by diffusion and rejection rates, respectively.
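For concreteness, the random walk Metropolis scheme referred to above can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual model: the one-dimensional Gaussian target and the uniform proposal of half-width `jump_length` (the characteristic length of the attempted jumps) are assumptions made purely for illustration.

```python
import math
import random


def target_density(x):
    """Unnormalized target density; a standard Gaussian is assumed here for illustration."""
    return math.exp(-0.5 * x * x)


def random_walk_metropolis(n_steps, jump_length, x0=0.0, seed=0):
    """Random walk Metropolis chain.

    Proposals are uniform jumps of half-width `jump_length` around the
    current state; they are accepted with the Metropolis probability
    min(1, pi(x') / pi(x)).  Returns the chain and the acceptance rate.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    accepted = 0
    for _ in range(n_steps):
        proposal = x + rng.uniform(-jump_length, jump_length)
        # Metropolis acceptance rule for a symmetric proposal
        if rng.random() < min(1.0, target_density(proposal) / target_density(x)):
            x = proposal
            accepted += 1
        samples.append(x)
    return samples, accepted / n_steps
```

Small `jump_length` gives high acceptance but slow, diffusion-limited exploration; large `jump_length` makes most proposals land in the tails and be rejected, so relaxation becomes rejection-limited — the two regimes separated by the transition discussed above.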