We present an unbiased method for Bayesian posterior means based on kinetic Langevin dynamics that combines advanced splitting methods with enhanced gradient approximations. Our approach avoids Metropolis correction by coupling Markov chains at different discretization levels in a multilevel Monte Carlo approach. Theoretical analysis demonstrates that our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem. It can achieve accuracy $\varepsilon > 0$ for estimating expectations of Lipschitz functions in $d$ dimensions with $\mathcal{O}(d^{1/4}\varepsilon^{-2})$ expected gradient evaluations, without assuming a warm start. We exhibit similar bounds using both approximate and stochastic gradients, and our method's computational cost is shown to scale independently of the size of the dataset. The proposed method is tested using a multinomial regression problem on the MNIST dataset and a Poisson regression model for soccer scores. Experiments indicate that the number of gradient evaluations per effective sample is independent of dimension, even when using inexact gradients. For product distributions, we give dimension-independent variance bounds. Our results demonstrate that in large-scale applications, the unbiased algorithm we present can be 2--3 orders of magnitude more efficient than the ``gold-standard'' randomized Hamiltonian Monte Carlo.
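To illustrate the debiasing idea behind coupling chains at different discretization levels, the following sketch applies a single-term randomized-level estimator to unadjusted (overdamped) Langevin chains on a toy one-dimensional Gaussian target. This is a simplified stand-in, not the paper's kinetic Langevin splitting scheme: the target, step sizes, time horizon, and geometric level distribution are all illustrative assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
MU = 2.0  # toy target: N(MU, 1), so grad log pi(x) = -(x - MU)


def grad_log_pi(x):
    return -(x - MU)


def chain_value(level, T=4.0, h0=0.5, x0=2.0):
    """f(X_T) for a single ULA chain at step size h0 * 2**(-level), f(x) = x."""
    h = h0 * 2.0 ** (-level)
    x = x0
    for _ in range(int(T / h)):
        x += h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal()
    return x


def coupled_difference(level, T=4.0, h0=0.5, x0=2.0):
    """f(X_T^level) - f(X_T^(level-1)), the two chains driven by the same
    Brownian increments: one coarse noise = sum of two fine noises."""
    h_f = h0 * 2.0 ** (-level)
    h_c = 2.0 * h_f
    x_f = x_c = x0
    for _ in range(int(T / h_c)):
        xi1, xi2 = rng.standard_normal(2)
        x_f += h_f * grad_log_pi(x_f) + np.sqrt(2 * h_f) * xi1
        x_f += h_f * grad_log_pi(x_f) + np.sqrt(2 * h_f) * xi2
        x_c += h_c * grad_log_pi(x_c) + np.sqrt(2 * h_f) * (xi1 + xi2)
    return x_f - x_c


def single_term_estimate():
    """One draw of the randomized-level (single-term) debiased estimator:
    sample a level N, return Delta_N / P(N), which telescopes in expectation
    to the zero-step-size limit and so carries no discretization bias."""
    N = rng.geometric(0.5) - 1  # P(N = l) = 0.5 ** (l + 1), l = 0, 1, ...
    delta = chain_value(0) if N == 0 else coupled_difference(N)
    return delta * 2.0 ** (N + 1)  # delta / P(N = l)


est = np.mean([single_term_estimate() for _ in range(5000)])
```

Because the coupled fine/coarse differences shrink with the level while the importance weights grow, the estimator keeps finite variance; in practice the level distribution is tuned against the strong convergence rate so that expected cost stays bounded as well.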