In many iterative optimization methods, fixed-point theory enables analysis of the convergence rate via the contraction factor associated with the linear approximation of the fixed-point operator. While this factor characterizes the asymptotic linear rate of convergence, it does not explain the non-linear behavior of these algorithms in the non-asymptotic regime. In this letter, we take into account the effect of the first-order approximation error and present a closed-form bound on the number of iterations required for the distance between the iterate and the limit point to fall below an arbitrarily small fraction of the initial distance. Our bound consists of two terms: one is the number of iterations required by the linearized version of the fixed-point operator, and the other is the overhead associated with the approximation error. Focusing on convergence in the scalar case, we prove the tightness of the proposed bound for positively quadratic first-order difference equations.
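For concreteness, here is a minimal worked sketch of the scalar setting the abstract refers to; the notation below ($e_k$ for the distance to the limit point, $\lambda$ for the contraction factor, $q$ for the quadratic coefficient) is ours and is only assumed to match the letter's "positively quadratic first-order difference equation":
\[
e_{k+1} = \lambda\, e_k + q\, e_k^2, \qquad \lambda \in (0,1), \quad q > 0.
\]
Dropping the quadratic term yields the linearized recursion $e_{k+1} = \lambda\, e_k$, for which $e_k \le \epsilon\, e_0$ holds after roughly $\log(1/\epsilon)/\log(1/\lambda)$ iterations; this corresponds to the first term of the kind of bound the abstract describes. While $q\, e_k$ is not yet small, each step contracts by only $\lambda + q\, e_k > \lambda$, so additional iterations are needed before the asymptotic linear rate takes over; this is the overhead the second term accounts for.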