We introduce the Hug and Hop Markov chain Monte Carlo algorithm for estimating expectations with respect to an intractable distribution. The algorithm alternates between two kernels: Hug and Hop. Hug is a non-reversible kernel that repeatedly applies the bounce mechanism from the recently proposed Bouncy Particle Sampler to produce a proposal point far from the current position, yet on almost the same contour of the target density, leading to a high acceptance probability. Hug is complemented by Hop, which deliberately proposes jumps between contours and has an efficiency that degrades very slowly with increasing dimension. There are many parallels between Hug and Hamiltonian Monte Carlo using a leapfrog integrator, including the order of the integration scheme; however, Hug is also able to make use of local Hessian information without requiring implicit numerical integration steps, and its performance is not terminally affected by unbounded gradients of the log-posterior. We test Hug and Hop empirically on a variety of toy targets and real statistical models and find that it can, and often does, outperform Hamiltonian Monte Carlo.
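The Hug mechanism described above can be sketched as follows: sample a velocity, then alternate half-steps of straight-line motion with reflections of the velocity in the local gradient of the log-target, so the trajectory hugs a contour of the density. This is a minimal illustrative sketch, not the authors' reference implementation; the standard-normal target, the step size `delta`, and the number of bounces are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy target: standard normal. Swap in any differentiable log-density.
    return -0.5 * np.dot(x, x)

def grad_log_target(x):
    return -x

def hug_step(x, delta=0.1, bounces=10):
    """One Hug proposal: half-step, bounce off the gradient, half-step.

    Reflections preserve the velocity's norm, so the (symmetric) velocity
    densities cancel and the acceptance ratio involves only the target.
    """
    v = rng.standard_normal(x.shape)
    y = x.copy()
    for _ in range(bounces):
        y = y + 0.5 * delta * v
        g = grad_log_target(y)
        ghat = g / np.linalg.norm(g)
        # Bounce: reflect v in the hyperplane orthogonal to the gradient,
        # steering the trajectory back along the current contour.
        v = v - 2.0 * np.dot(v, ghat) * ghat
        y = y + 0.5 * delta * v
    log_alpha = log_target(y) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:
        return y, True   # accepted: move to the proposed point
    return x, False      # rejected: stay put
```

Because the proposal stays close to a contour of the target, the log-ratio in `log_alpha` is near zero and acceptance rates remain high even for proposals far from the current point, which is the behaviour the abstract highlights.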