We introduce the Hug and Hop Markov chain Monte Carlo algorithm for estimating expectations with respect to an intractable distribution. The algorithm alternates between two kernels: Hug and Hop. Hug is a non-reversible kernel that repeatedly applies the bounce mechanism from the recently proposed Bouncy Particle Sampler to produce a proposal point far from the current position yet on almost the same contour of the target density, leading to a high acceptance probability. Hug is complemented by Hop, which deliberately proposes jumps between contours and has an efficiency that degrades very slowly with increasing dimension. There are many parallels between Hug and Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, including the order of the integration scheme; however, Hug can also exploit local Hessian information without requiring implicit numerical integration steps, and its performance is not terminally affected by unbounded gradients of the log-posterior. We test Hug and Hop empirically on a variety of toy targets and real statistical models and find that it can, and often does, outperform HMC.
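The bounce mechanism behind the Hug kernel can be illustrated with a short sketch. This is not the authors' implementation: the target (a standard bivariate Gaussian), the integration time `T`, and the number of bounces `B` are all assumptions chosen for illustration. Each bounce moves half a step, reflects the velocity in the gradient of the log-density, then moves another half step, so the trajectory "hugs" a contour; because the reflection preserves the speed and the map has unit Jacobian, the Metropolis ratio reduces to the target ratio alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    # Illustrative target: standard bivariate Gaussian (an assumption,
    # not a target from the paper's experiments).
    return -0.5 * x @ x

def grad_log_pi(x):
    return -x

def hug_step(x, T=1.0, B=10):
    """One Hug proposal: B bounces with step size delta = T / B."""
    delta = T / B
    v = rng.standard_normal(x.shape)  # fresh velocity for each proposal
    x_new = np.array(x, dtype=float)
    for _ in range(B):
        x_new = x_new + 0.5 * delta * v
        g = grad_log_pi(x_new)
        # Bounce: reflect the velocity in the gradient at the midpoint,
        # keeping the trajectory close to a contour of log_pi.
        v = v - 2.0 * (v @ g) / (g @ g) * g
        x_new = x_new + 0.5 * delta * v
    # Accept/reject: the velocity refresh is symmetric and the bounce map
    # has unit Jacobian, so only the target ratio appears.
    if np.log(rng.uniform()) < log_pi(x_new) - log_pi(x):
        return x_new, True
    return x, False

# Short run from a point off the high-density region.
x = np.array([3.0, 0.0])
accepts = 0
for _ in range(500):
    x, accepted = hug_step(x)
    accepts += accepted
```

Because the proposal stays on almost the same contour, the acceptance rate in this run is close to one even though each proposal moves a distance of order `T` from the current point.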