Owing to recent advances in "Big Data" modeling and prediction tasks, variational Bayesian estimation has gained popularity for its ability to provide tractable approximations to otherwise intractable posteriors. One key technique for approximate inference is stochastic variational inference (SVI). SVI poses variational inference as a stochastic optimization problem and solves it iteratively using noisy gradient estimates. It aims to handle massive data for prediction and classification tasks by applying complex Bayesian models that contain both observed and latent variables. This paper aims to decentralize SVI, bringing parallel-computation, secure-learning, and robustness benefits. We use the Alternating Direction Method of Multipliers (ADMM) in a top-down setting to develop a distributed SVI algorithm in which independent learners running inference algorithms need only share their estimated model parameters rather than their private datasets. We then extend this distributed SVI-ADMM algorithm, which we propose first, to an ADMM-based networked SVI algorithm in which the learners not only work distributively but also share information according to the rules of a graph over which they form a network. This line of work falls under the umbrella of `deep learning over networks'. We verify our algorithm on a topic-modeling problem over a corpus of Wikipedia articles, illustrate the results with a latent Dirichlet allocation (LDA) topic model for large-scale document classification, compare performance with the centralized algorithm, and use numerical experiments to corroborate the analytical results.
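The parameter-sharing idea described above can be illustrated with a minimal consensus-ADMM sketch (not the paper's algorithm): each learner keeps its private data local and communicates only its parameter estimate, while a consensus step averages the shared estimates. The quadratic local objective `f_i(x) = 0.5 * ||x - a_i||^2` and all variable names here are illustrative assumptions.

```python
import numpy as np

# Illustrative consensus ADMM: N learners, each with private data a_i,
# agree on a global parameter z by exchanging only parameter estimates.
np.random.seed(0)
N, d, rho = 5, 3, 1.0
a = np.random.randn(N, d)   # private datasets (never communicated)
x = np.zeros((N, d))        # local parameter estimates
z = np.zeros(d)             # global consensus variable
u = np.zeros((N, d))        # scaled dual variables

for _ in range(100):
    # local x-update: argmin_x f_i(x) + (rho/2)||x - z + u_i||^2
    # (closed form for the quadratic f_i assumed here)
    x = (a + rho * (z - u)) / (1 + rho)
    # consensus z-update: only x_i + u_i crosses the network
    z = (x + u).mean(axis=0)
    # dual update enforcing agreement x_i = z
    u = u + x - z
```

For this toy objective the consensus variable converges to the average of the learners' local optima, which is the behavior a distributed SVI scheme relies on when fusing locally estimated variational parameters.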