We propose accelerated randomized coordinate descent algorithms for stochastic optimization and online learning. Our algorithms have significantly lower per-iteration complexity than known accelerated gradient algorithms. For online learning, the proposed algorithms achieve better regret performance than known randomized online coordinate descent algorithms, and for stochastic optimization they match the convergence rates of the best known randomized coordinate descent algorithms. We also present simulation results demonstrating the performance of the proposed algorithms.
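To make the per-iteration complexity contrast concrete, the sketch below (our own illustration, not the paper's accelerated method) compares one full-gradient step against one randomized coordinate step on a least-squares objective; the problem sizes, step-size constants, and variable names are assumptions made only for this demo.

```python
import numpy as np

# Minimal sketch contrasting per-iteration cost on f(x) = 0.5 * ||A x - b||^2:
# a full gradient step touches all d coordinates, a randomized coordinate step
# updates a single sampled coordinate.

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L_coord = (A ** 2).sum(axis=0)          # per-coordinate Lipschitz constants ||A_i||^2
L_full = np.linalg.norm(A, 2) ** 2      # global Lipschitz constant (largest singular value squared)

x_gd = np.zeros(d)
x_cd = np.zeros(d)
residual_cd = A @ x_cd - b              # maintained so each coordinate step costs O(n), not O(n*d)

for _ in range(1000):
    # Full gradient step: cost O(n * d) per iteration.
    x_gd -= (A.T @ (A @ x_gd - b)) / L_full

    # Randomized coordinate step: one uniformly sampled coordinate, cost O(n) per iteration.
    i = rng.integers(d)
    g_i = A[:, i] @ residual_cd         # partial derivative along coordinate i
    step = g_i / L_coord[i]
    x_cd[i] -= step
    residual_cd -= step * A[:, i]       # incremental residual update

print("full-gradient objective:", 0.5 * np.sum((A @ x_gd - b) ** 2))
print("coordinate-descent objective:", 0.5 * np.sum((A @ x_cd - b) ** 2))
```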