We introduce an ensemble Markov chain Monte Carlo approach to sampling from a probability density with known likelihood. This method upgrades an underlying Markov chain by allowing an ensemble of such chains to interact via a process in which one chain's state is cloned as another's is deleted. This effective teleportation of states can overcome issues of metastability in the underlying chain, as the scheme enjoys rapid mixing once the modes of the target density have been populated. We derive a mean-field limit for the evolution of the ensemble. We analyze the global and local convergence of this mean-field limit, showing asymptotic convergence independent of the spectral gap of the underlying Markov chain, and moreover we interpret the limiting evolution as a gradient flow. We explain how interaction can be applied selectively to a subset of state variables in order to maintain advantage on very high-dimensional problems. Finally, we present the application of our methodology to Bayesian hyperparameter estimation for Gaussian process regression.
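The clone/delete interaction described above can be illustrated with a minimal toy sketch. The code below runs an ensemble of random-walk Metropolis chains on a bimodal 1D target and occasionally lets one walker clone another's state; the target density, step size, swap probability, and the Metropolis-style acceptance rule for cloning are all illustrative assumptions and do not reproduce the paper's exact interaction scheme (which chooses rates so as to preserve the target distribution in the mean-field limit).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_density(x):
    # Toy bimodal target: equal mixture of N(-4, 1) and N(4, 1).
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

def ensemble_step(walkers, step=0.5, swap_prob=0.1):
    """One sweep: a random-walk Metropolis move per walker, then an
    optional clone/delete interaction between one random pair."""
    for i in range(len(walkers)):
        prop = walkers[i] + step * rng.standard_normal()
        if np.log(rng.random()) < log_density(prop) - log_density(walkers[i]):
            walkers[i] = prop
    if rng.random() < swap_prob:
        i, j = rng.choice(len(walkers), size=2, replace=False)
        # Walker i's state is deleted and replaced by a clone of walker
        # j's state, accepted with a Metropolis-style ratio. This is an
        # illustrative rule, not the paper's exact birth-death dynamics.
        if np.log(rng.random()) < log_density(walkers[j]) - log_density(walkers[i]):
            walkers[i] = walkers[j]
    return walkers

walkers = rng.standard_normal(32)  # ensemble initialized near the origin
for _ in range(2000):
    walkers = ensemble_step(walkers)
```

Because a cloned state can land anywhere another walker sits, the interaction effectively teleports probability mass between modes, which is why mixing becomes rapid once every mode holds at least one walker.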