Online optimization has gained increasing interest due to its capability of tracking real-world streaming data. Although online optimization methods have been widely studied in the frequentist setting, few works have considered the online Bayesian sampling problem. In this paper, we study an Online Particle-based Variational Inference (OPVI) algorithm that uses a set of particles to represent the approximating distribution. To reduce the gradient error caused by stochastic approximation, we incorporate a sublinearly increasing batch size to reduce the variance. To track the performance of the OPVI algorithm with respect to a sequence of dynamically changing target posteriors, we provide a detailed theoretical analysis from the perspective of Wasserstein gradient flow, in terms of dynamic regret. Experiments on synthetic data and Bayesian neural networks show that the proposed algorithm achieves better results than naively applying existing Bayesian sampling methods in the online setting.
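To make the idea concrete, below is a minimal sketch of an online particle-based VI round, assuming an SVGD-style particle transport step and a square-root batch-size growth schedule; it is not the paper's exact OPVI algorithm, and the toy Gaussian-mean model, kernel bandwidth, and step size are illustrative assumptions.

```python
# Sketch only: SVGD-style particle update on streaming data with a
# sublinearly increasing minibatch size (assumed sqrt growth) to reduce
# stochastic-gradient variance, as described in the abstract.
import numpy as np

def rbf_kernel(x, h=1.0):
    # Pairwise RBF kernel k(x_i, x_j) and its gradient w.r.t. x_i.
    diff = x[:, None, :] - x[None, :, :]            # (n, n, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    grad_k = -diff / h ** 2 * k[:, :, None]         # (n, n, d)
    return k, grad_k

def svgd_step(particles, grad_log_p, step_size=1e-2):
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    k, grad_k = rbf_kernel(particles)
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=0)) / particles.shape[0]
    return particles + step_size * phi

rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 1))                # particles representing q_t

for t in range(1, 200):
    batch_size = int(np.ceil(t ** 0.5))             # sublinearly increasing batch size
    data = rng.normal(loc=2.0, scale=1.0, size=batch_size)  # observations at round t

    # Stochastic gradient of the log posterior for a toy Gaussian-mean model
    # with unit noise variance and a N(0, 10) prior on the mean (assumption).
    def grad_log_p(x, data=data, n=batch_size):
        return -x / 10.0 + np.sum(data) - n * x

    particles = svgd_step(particles, grad_log_p)

print("posterior mean estimate:", particles.mean())
```

In this sketch, each round re-targets the particles at the posterior implied by the newly arrived minibatch; the growing batch size plays the variance-reduction role that the abstract attributes to the sublinearly increasing batch-size scheme.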