In this work we explore a new framework for approximate Bayesian inference on large datasets based on stochastic control. We advocate stochastic control as a finite-time, low-variance alternative to popular steady-state methods such as stochastic gradient Langevin dynamics (SGLD). Furthermore, we discuss and adapt the existing theoretical guarantees of this framework and establish connections to existing variational inference (VI) routines in SDE-based models.