The evolution strategy (ES) is one of the most promising classes of algorithms for black-box continuous optimization. Despite its broad success in applications, theoretical analysis of its convergence speed has been limited to convex quadratic functions and their monotone transformations. In this study, an upper bound and a lower bound on the rate of linear convergence of the (1+1)-ES on locally $L$-strongly convex functions with $U$-Lipschitz continuous gradient are derived as $\exp\left(-\Omega_{d\to\infty}\left(\frac{L}{d\cdot U}\right)\right)$ and $\exp\left(-\frac1d\right)$, respectively. Notably, no prior knowledge of the mathematical properties of the objective function, such as the Lipschitz constant, is given to the algorithm, whereas existing analyses of derivative-free optimization algorithms require it.
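For concreteness, the following is a minimal sketch of a standard (1+1)-ES with success-based (1/5-success-rule style) step-size adaptation, the kind of elitist ES the convergence analysis concerns. The specific adaptation constants (`inc`, `dec`) and the test objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, budget=2000, inc=1.1, dec=1.1 ** -0.25):
    """(1+1)-ES with 1/5-success-rule style step-size adaptation (a sketch).

    No knowledge of f (e.g., Lipschitz constants) is used: the step size
    sigma adapts purely from observed success/failure of each mutation.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(budget):
        y = x + sigma * np.random.randn(x.size)  # isotropic Gaussian mutation
        fy = f(y)
        if fy <= fx:               # elitist (plus) selection: keep the better point
            x, fx = y, fy
            sigma *= inc           # successful step: enlarge the step size
        else:
            sigma *= dec           # failure: shrink (dec = inc**-0.25 targets ~1/5 success)
    return x, fx, sigma
```

On a strongly convex objective such as the sphere function, the best-so-far value decreases geometrically, consistent with the linear convergence rates stated above.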