The problem of optimally scaling the proposal distribution in a Markov chain Monte Carlo algorithm is critical to the quality of the generated samples. Much work has gone into obtaining such results for various Metropolis-Hastings (MH) algorithms. Recently, acceptance probabilities other than MH have been employed in problems with intractable target distributions, yet there are few resources available for tuning Gaussian proposal distributions in this setting. We obtain optimal scaling results for a general class of acceptance functions, which includes Barker's and Lazy-MH. In particular, optimal values for Barker's algorithm are derived and found to differ significantly from those obtained for the MH algorithm. Our theoretical conclusions are supported by numerical simulations indicating that when the optimal proposal variance is unknown, tuning to the optimal acceptance probability remains an effective strategy.
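For concreteness (a standard reference form, not a result specific to this work), with a symmetric Gaussian random-walk proposal $y = x + \sigma Z$, $Z \sim N(0, I_d)$, the MH and Barker acceptance functions for a target density $\pi$ are commonly written as
\[
\alpha_{\mathrm{MH}}(x,y) \;=\; \min\!\left\{1,\ \frac{\pi(y)}{\pi(x)}\right\},
\qquad
\alpha_{\mathrm{B}}(x,y) \;=\; \frac{\pi(y)}{\pi(x)+\pi(y)},
\]
so that $\alpha_{\mathrm{B}}(x,y) \le \alpha_{\mathrm{MH}}(x,y)$ pointwise; the optimal choice of $\sigma$ and the associated optimal acceptance rate therefore need not coincide across the two rules.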