Among random sampling methods, Markov Chain Monte Carlo algorithms are foremost. Using a combination of analytical and numerical approaches, we study their convergence towards the steady state within a random walk Metropolis scheme. We show that the deviations from the target steady-state distribution undergo a localization transition as a function of the characteristic length of the attempted jumps defining the random walk. This transition drastically changes the error introduced by incomplete convergence, and distinguishes two regimes in which the relaxation mechanism is limited by diffusion and by rejection, respectively.
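For orientation, the following is a minimal one-dimensional sketch of a random walk Metropolis sampler with a tunable jump length; the Gaussian target, the parameter name `jump_scale`, and the function names are illustrative assumptions and are not taken from the paper. Small jumps are accepted often but explore the target diffusively, while large jumps are mostly rejected, which is the qualitative distinction between the two regimes mentioned above.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps, jump_scale, rng=None):
    """Random walk Metropolis sampler with Gaussian proposals of
    characteristic length `jump_scale` (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    x, logp = x0, log_target(x0)
    samples = np.empty(n_steps)
    accepted = 0
    for i in range(n_steps):
        # Attempted jump of characteristic length `jump_scale`
        x_new = x + jump_scale * rng.standard_normal()
        logp_new = log_target(x_new)
        # Metropolis acceptance rule
        if np.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
            accepted += 1
        samples[i] = x
    return samples, accepted / n_steps

# Illustrative target: a standard Gaussian (an assumption made for this
# sketch, not the distribution studied in the paper).
log_gauss = lambda x: -0.5 * x**2
_, acc_small = random_walk_metropolis(log_gauss, 5.0, 10_000, 0.1)   # diffusion-limited regime
_, acc_large = random_walk_metropolis(log_gauss, 5.0, 10_000, 10.0)  # rejection-limited regime
print(f"acceptance rate: small jumps {acc_small:.2f}, large jumps {acc_large:.2f}")
```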