We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated setting. To do so, we propose specific model-based algorithms and an acceleration scheme for which we provide non-asymptotic convergence guarantees that are order-optimal in all problem-dependent constants and achieve linear speedup in the minibatch size, while maintaining the desirable robustness properties (e.g., to stepsize choice) of the aProx family. Additionally, we show improved convergence rates and matching lower bounds identifying new fundamental constants for "interpolation" problems, whose importance in statistical machine learning continues to grow; as one example, this yields a parallelization strategy for alternating projections. We corroborate our theoretical results with experiments demonstrating the gains that accurate modeling, acceleration, and minibatching provide.
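To make the minibatch model-based update concrete, the following is a minimal sketch of one natural instantiation: an aProx-style step using the truncated model applied to a minibatch-averaged loss, assuming a known lower bound (here zero) on the per-sample losses. The function name, the minibatch-averaging choice, and the interface are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def minibatch_truncated_step(x, alpha, losses, grads, lower_bound=0.0):
    """One aProx-style step with the truncated model on a minibatch (sketch).

    The minibatch model is taken to be the linearization built from the
    averaged loss and averaged subgradient, truncated below at a known
    lower bound.  The proximal step on this piecewise-linear model reduces
    to an SGD step whose length is clipped at the Polyak-type value.
    `losses` holds the per-sample losses f_i(x); `grads` holds per-sample
    subgradients g_i at the current iterate x.
    """
    g = np.mean(grads, axis=0)          # averaged minibatch subgradient
    f = float(np.mean(losses))          # averaged minibatch loss
    g_norm_sq = float(np.dot(g, g))
    if g_norm_sq == 0.0:
        return x                        # model is flat at x: no movement
    # Step like SGD, but never past the point where the averaged linear
    # model reaches the known lower bound (the truncation).
    step = min(alpha, (f - lower_bound) / g_norm_sq)
    return x - step * g
```

When the minibatch consists of a single sample and the lower bound is zero, this recovers the truncated-model update from the aProx literature; the averaging over the minibatch is one simple way to form a minibatch model and is stated here only as an assumption for illustration.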