We consider the problem of designing uniformly stable first-order optimization algorithms for empirical risk minimization. Uniform stability is often used to obtain generalization error bounds for optimization algorithms, and we are interested in a general approach to achieve it. For Euclidean geometry, we suggest a black-box conversion which, given a smooth optimization algorithm, produces a uniformly stable version of the algorithm while maintaining its convergence rate up to logarithmic factors. Using this reduction we obtain a (nearly) optimal algorithm for smooth optimization with convergence rate $\widetilde{O}(1/T^2)$ and uniform stability $O(T^2/n)$, resolving an open problem of Chen et al. (2018); Attia and Koren (2021). For more general geometries, we develop a variant of Mirror Descent for smooth optimization with convergence rate $\widetilde{O}(1/T)$ and uniform stability $O(T/n)$, leaving open the question of devising a general conversion method as in the Euclidean case.