Hamiltonian Monte Carlo (HMC) is a premier Markov Chain Monte Carlo (MCMC) algorithm for continuous target distributions. Its full potential can only be unleashed when its problem-dependent hyperparameters are tuned well. The adaptation of one such hyperparameter, trajectory length ($\tau$), has been closely examined by many research programs, with the No-U-Turn Sampler (NUTS) emerging as the preferred method in 2011. A decade later, the evolving hardware landscape has led to the proliferation of personal and cloud-based SIMD hardware in the form of Graphics and Tensor Processing Units (GPUs, TPUs), which is hostile to certain algorithmic details of NUTS. This has opened up a hole in the MCMC toolkit for an algorithm that can learn $\tau$ while maintaining good hardware utilization. In this work we build on recent advances in this direction and introduce SNAPER-HMC, a SIMD-accelerator-friendly adaptive-MCMC scheme for learning $\tau$. The algorithm maximizes an upper bound on per-gradient effective sample size along an estimated principal component. We empirically show that SNAPER-HMC is stable when combined with mass-matrix adaptation, and is tolerant of certain pathological target-distribution covariance spectra while providing excellent long- and short-run sampling efficiency. We provide a complete implementation of continuous multi-chain adaptive HMC combining trajectory learning with standard step-size and mass-matrix adaptation in one turnkey inference package.