Alternating Direction Method of Multipliers (ADMM) is a popular method for solving large-scale machine learning problems. Stochastic ADMM was proposed to reduce the per-iteration computational cost, making it more suitable for big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM to obtain faster convergence rates, as in SAG-ADMM and SVRG-ADMM. However, their convergence rates are still suboptimal with respect to the smoothness constant. In this paper, we propose an accelerated stochastic ADMM algorithm with variance reduction, which enjoys a faster convergence rate than all existing stochastic ADMM algorithms. We theoretically analyse its convergence rate and show that its dependence on the smoothness constant is optimal. We also empirically validate its effectiveness and demonstrate its superiority over other stochastic ADMM algorithms.
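To make the setting concrete, the following is a minimal, illustrative sketch of a variance-reduced stochastic ADMM update in the SVRG style (not the accelerated algorithm proposed in this paper). It assumes a regularized logistic-regression problem of the form min_x (1/n) Σ_i f_i(x) + λ‖Ax‖₁, reformulated with a splitting variable z subject to Ax − z = 0; the function names, step sizes, and problem data are hypothetical choices made for the example.

```python
# Sketch of SVRG-style variance-reduced stochastic ADMM for
#     min_x  (1/n) * sum_i f_i(x) + lam * ||A x||_1
# reformulated as  min_{x,z} (1/n) sum_i f_i(x) + lam ||z||_1  s.t.  A x - z = 0.
# f_i is a logistic loss; all parameters below are illustrative assumptions,
# not the accelerated method of the paper.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_grad(X, y, w, idx=None):
    """Gradient of the average logistic loss, optionally over a mini-batch idx."""
    if idx is not None:
        X, y = X[idx], y[idx]
    margins = -y * (X @ w)
    coeff = -y / (1.0 + np.exp(-margins))        # d/dw of log(1 + exp(-y x^T w))
    return X.T @ coeff / X.shape[0]

def svrg_admm(X, y, A, lam=0.1, rho=1.0, eta=0.1, epochs=20, m=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or n                                   # inner-loop length per epoch
    x = np.zeros(d)
    z = np.zeros(A.shape[0])
    u = np.zeros(A.shape[0])                     # scaled dual variable
    M = np.eye(d) / eta + rho * A.T @ A          # system matrix for the x-update
    for _ in range(epochs):
        x_tilde = x.copy()
        full_grad = logistic_grad(X, y, x_tilde)     # snapshot (full) gradient
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient at the current iterate.
            v = (logistic_grad(X, y, x, [i])
                 - logistic_grad(X, y, x_tilde, [i])
                 + full_grad)
            # x-update: linearized loss + exact augmented term + proximal term.
            rhs = x / eta - v + rho * A.T @ (z - u)
            x = np.linalg.solve(M, rhs)
            # z-update: proximal operator of the l1 regularizer.
            z = soft_threshold(A @ x + u, lam / rho)
            # Dual update.
            u = u + A @ x - z
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 20
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d); w_true[:3] = 1.0
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))
    A = np.eye(d)                                # plain lasso penalty for simplicity
    print("learned weights:", np.round(svrg_admm(X, y, A), 3))
```

The variance-reduced gradient v uses a per-epoch snapshot x_tilde and its full gradient, which is what distinguishes SVRG-ADMM from plain stochastic ADMM; the acceleration studied in the paper would additionally introduce momentum-style extrapolation on top of such updates.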