In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms.
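To make the learning-rate dependence concrete, here is a minimal sketch of the standard SVGD update that the abstract contrasts against. This is vanilla SVGD, not the paper's coin-betting method; the RBF kernel with a fixed bandwidth and the function names are illustrative assumptions.

```python
import numpy as np

def svgd_step(particles, grad_log_p, lr, bandwidth=1.0):
    """One vanilla SVGD update (illustrative sketch).

    particles : (n, d) array of particle positions
    grad_log_p: callable returning the score of the target, shape (n, d)
    lr        : the learning rate the abstract says must be tuned;
                coin-betting ParVI methods remove this hyperparameter
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]   # (n, n, d): x_i - x_j
    sq = np.sum(diffs**2, axis=-1)                          # squared distances
    k = np.exp(-sq / (2.0 * bandwidth))                     # RBF kernel matrix

    scores = grad_log_p(particles)                          # (n, d)
    # Attractive term: kernel-weighted scores pull particles toward mass.
    attract = k @ scores
    # Repulsive term: kernel gradient pushes particles apart.
    repulse = np.sum(k[:, :, None] * diffs, axis=1) / bandwidth
    return particles + lr * (attract + repulse) / n

# Illustrative usage: particles drift toward a standard Gaussian target.
rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=(50, 1))
for _ in range(500):
    x = svgd_step(x, lambda p: -p, lr=0.1)  # score of N(0, I) is -x
```

If `lr` is set too large the particle system diverges, and too small a value stalls convergence, which is precisely the tuning burden the coin-betting approach is designed to eliminate.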