Sharpness is an almost generic assumption in continuous optimization that bounds the distance from minima by objective function suboptimality. It leads to the acceleration of first-order methods via restarts. However, sharpness involves problem-specific constants that are typically unknown, and previous restart schemes reduce convergence rates. Moreover, such schemes are challenging to apply in the presence of noise or approximate model classes (e.g., in compressive imaging or learning problems), and typically assume that the first-order method used produces feasible iterates. We consider the assumption of approximate sharpness, a generalization of sharpness that incorporates an unknown constant perturbation to the objective function error. This constant offers greater robustness (e.g., with respect to noise or relaxation of model classes) for finding approximate minimizers. By employing a new type of search over the unknown constants, we design a restart scheme that applies to general first-order methods and does not require the first-order method to produce feasible iterates. Our scheme maintains the same convergence rate as when assuming knowledge of the constants. The rates of convergence we obtain for various first-order methods either match the optimal rates or improve on previously established rates for a wide range of problems. We showcase our restart scheme on several examples and point to future applications and developments of our framework and theory.
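To make the idea concrete, below is a minimal, self-contained sketch (not the paper's algorithm or its proved-optimal schedule) of restarting a first-order method when the sharpness constants are unknown: run an inner method for a budget suggested by each candidate pair of constants on a small grid, keep the overall best iterate, and restart from it. The inner method, the grids, the inner-budget heuristic, and the toy objective are all illustrative assumptions.

```python
# Illustrative sketch only: a restart wrapper with a grid search over
# unknown (alpha, beta) sharpness constants. Not the scheme from the paper.
import numpy as np


def gradient_descent(f, grad, x0, step, num_iters):
    """Plain gradient descent used here as the inner first-order method."""
    x = x0.copy()
    best_x, best_val = x.copy(), f(x)
    for _ in range(num_iters):
        x = x - step * grad(x)
        val = f(x)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val


def restart_scheme(f, grad, x0, total_budget, alpha_grid, beta_grid, step=0.1):
    """For each candidate constant pair, run the inner method for a heuristic
    number of iterations, then restart from the best iterate found so far."""
    best_x, best_val = x0.copy(), f(x0)
    used = 0
    while used < total_budget:
        for alpha in alpha_grid:
            for beta in beta_grid:
                # Heuristic inner budget growing with the assumed conditioning.
                inner = max(1, int(np.ceil(beta / alpha)))
                inner = min(inner, total_budget - used)
                if inner <= 0:
                    return best_x, best_val
                x, val = gradient_descent(f, grad, best_x, step, inner)
                used += inner
                if val < best_val:
                    best_x, best_val = x, val  # restart from the improved point
    return best_x, best_val


if __name__ == "__main__":
    # Toy sharp objective f(x) = ||x||_2 (sharp with exponent 1 at the origin).
    f = lambda x: np.linalg.norm(x)
    grad = lambda x: x / np.linalg.norm(x) if np.linalg.norm(x) > 0 else np.zeros_like(x)
    x0 = np.array([3.0, -4.0])
    x_hat, val = restart_scheme(f, grad, x0, total_budget=500,
                                alpha_grid=[0.1, 1.0, 10.0],
                                beta_grid=[1.0, 2.0])
    print(f"approximate minimizer: {x_hat}, objective value: {val:.3e}")
```

The wrapper only consumes the best objective value returned by the inner method, so it does not require the inner iterates to be feasible; the grid search stands in, very loosely, for the paper's search over the unknown constants.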