Stochastic variational Bayes algorithms have become very popular in the machine learning literature, particularly in the context of nonparametric Bayesian inference. These algorithms replace the true but intractable posterior distribution with the best (in the sense of Kullback-Leibler divergence) member of a tractable family of distributions, using stochastic gradient algorithms to carry out the optimization. Stochastic variational Bayes inference implicitly trades off computational speed for accuracy, but the loss of accuracy is highly model (and even dataset) specific. In this paper we carry out an empirical evaluation of this trade-off in the context of stochastic blockmodels, a widely used class of probabilistic models for network and relational data. Our experiments indicate that, for stochastic blockmodels, relatively large subsamples are required for these algorithms to find accurate approximations of the posterior, and that even then the quality of the approximations provided by stochastic gradient variational algorithms can be highly variable.
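The core idea summarized above can be illustrated on a toy conjugate model where the exact posterior is available for comparison. The sketch below (our own minimal illustration, not the paper's experimental setup, and using a Gaussian mean model rather than a stochastic blockmodel) runs stochastic natural-gradient updates on a variational Gaussian from random minibatches, with a Robbins-Monro step size; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N observations from N(mu_true, 1); prior mu ~ N(0, 1).
# The exact posterior is then conjugate, so the stochastic variational
# approximation can be checked against it directly.
N, mu_true = 1000, 2.0
x = rng.normal(mu_true, 1.0, size=N)

# Exact conjugate posterior N(m_post, v_post) for comparison.
v_post = 1.0 / (1.0 + N)
m_post = v_post * x.sum()

# Stochastic variational inference: q(mu) = N(m, v), updated from
# minibatches by interpolating the natural parameters of q toward a
# noisy estimate rescaled to the full data set (a sketch only).
m, v = 0.0, 1.0
batch = 50
for t in range(1, 2001):
    idx = rng.integers(0, N, size=batch)
    nat1_hat = (N / batch) * x[idx].sum()  # noisy precision-weighted mean
    nat2_hat = 1.0 + N                     # posterior precision (exact here)
    rho = t ** -0.7                        # decreasing Robbins-Monro step
    prec = (1 - rho) * (1.0 / v) + rho * nat2_hat
    mean_nat = (1 - rho) * (m / v) + rho * nat1_hat
    v, m = 1.0 / prec, mean_nat / prec

print(m, m_post, v, v_post)
```

With only a modest minibatch size the fitted mean still fluctuates around the exact posterior mean, which mirrors the abstract's point that the accuracy of stochastic gradient variational approximations depends on how much data each update sees.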