In the current work we present two generalizations of the Parallel Tempering algorithm, inspired by the so-called continuous-time Infinite Swapping algorithm. This method originated in the molecular dynamics community and can be understood as the limiting case of the continuous-time Parallel Tempering algorithm in which the (random) time between swaps of states between parallel chains goes to zero, so that states are swapped between chains continuously. We extend this idea to the setting of discrete-time Markov chains and present two Markov chain Monte Carlo algorithms that follow the same paradigm as the continuous-time Infinite Swapping procedure. We analyze the convergence properties of these discrete-time algorithms in terms of their spectral gap and implement them to sample from various target distributions. Numerical results show that the proposed methods significantly improve over more traditional sampling algorithms such as Random Walk Metropolis and (standard) Parallel Tempering.
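For orientation, the following is a minimal sketch of the standard discrete-time Parallel Tempering baseline referred to above, not of the proposed infinite-swapping generalizations; the bimodal target, temperature ladder, and proposal step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_target(x):
    # Illustrative 1-D bimodal target: mixture of Gaussians at -3 and +3
    # (unnormalized log-density, which suffices for Metropolis ratios).
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

def parallel_tempering(n_steps, betas=(1.0, 0.2), step=1.0, seed=None):
    rng = np.random.default_rng(seed)
    x = np.zeros(len(betas))          # one state per tempered chain
    samples = []
    for _ in range(n_steps):
        # Random Walk Metropolis update within each chain at inverse
        # temperature beta, targeting pi(x)^beta.
        for i, beta in enumerate(betas):
            prop = x[i] + step * rng.standard_normal()
            if np.log(rng.random()) < beta * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        # Propose swapping the states of a random adjacent pair of chains;
        # the acceptance ratio preserves the joint tempered distribution.
        i = rng.integers(len(betas) - 1)
        log_acc = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < log_acc:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])          # the beta = 1 chain targets the true law
    return np.array(samples)

samples = parallel_tempering(10_000, seed=0)
```

In this baseline, swaps are attempted once per iteration; the Infinite Swapping idea described above corresponds to taking the time between such swap attempts to zero in the continuous-time version of the scheme.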