Recent works have shown that stochastic gradient descent (SGD) achieves the fast convergence rates of full-batch gradient descent for over-parameterized models satisfying certain interpolation conditions. However, the step-size used in these works depends on unknown quantities and SGD's practical performance heavily relies on the choice of this step-size. We propose to use line-search techniques to automatically set the step-size when training models that can interpolate the data. In the interpolation setting, we prove that SGD with a stochastic variant of the classic Armijo line-search attains the deterministic convergence rates for both convex and strongly-convex functions. Under additional assumptions, SGD with Armijo line-search is shown to achieve fast convergence for non-convex functions. Furthermore, we show that stochastic extra-gradient with a Lipschitz line-search attains linear convergence for an important class of non-convex functions and saddle-point problems satisfying interpolation. To improve the proposed methods' practical performance, we give heuristics to use larger step-sizes and acceleration. We compare the proposed algorithms against numerous optimization methods on standard classification tasks using both kernel methods and deep networks. The proposed methods result in competitive performance across all models and datasets, while being robust to the precise choices of hyper-parameters. For multi-class classification using deep networks, SGD with Armijo line-search results in both faster convergence and better generalization.
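To make the stochastic Armijo line-search concrete, the sketch below shows one common way to implement it: at every iteration the step-size is backtracked on the same mini-batch used to compute the gradient until a sufficient-decrease condition holds. This is only an illustrative sketch, not the paper's exact implementation; the helpers `loss_fn` and `grad_fn` and the constants `lr_max`, `c`, and `beta` are hypothetical placeholders chosen for readability.

```python
import numpy as np

def sgd_armijo(loss_fn, grad_fn, w, batches, lr_max=1.0, c=0.1, beta=0.9, n_epochs=10):
    """Sketch of SGD with a stochastic Armijo line-search (assumed interface).

    loss_fn(w, batch) and grad_fn(w, batch) evaluate the mini-batch loss and
    gradient. The step-size is backtracked from lr_max until the Armijo
    condition holds on the current mini-batch:
        f_i(w - eta * g) <= f_i(w) - c * eta * ||g||^2
    """
    for _ in range(n_epochs):
        for batch in batches:
            g = grad_fn(w, batch)
            f = loss_fn(w, batch)
            eta = lr_max
            # Backtrack until the stochastic Armijo condition is satisfied.
            while loss_fn(w - eta * g, batch) > f - c * eta * np.dot(g, g):
                eta *= beta
            w = w - eta * g
    return w
```

Under interpolation, resetting the step-size to a large value (here `lr_max`) at every iteration is what allows the method to recover the fast, deterministic-style rates while remaining free of problem-dependent constants.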