In this work, we propose the {Randomized Coordinate Subgradient method} (RCS) for solving nonsmooth convex and nonsmooth nonconvex (nonsmooth weakly convex) optimization problems. At each iteration, RCS randomly selects one block of coordinates to update, which is more practical than updating all coordinates. We impose a linearly bounded subgradients assumption on the objective function, which is more general than the traditional Lipschitz continuity assumption and accounts for practical scenarios. We then conduct a thorough convergence analysis of RCS in both the convex and the nonconvex case under this generalized Lipschitz-type assumption. Specifically, when $f$ is nonsmooth convex, we establish an $\widetilde{\mathcal{O}}(1/\sqrt{k})$ convergence rate in expectation and a $\widetilde{o}(1/\sqrt{k})$ almost sure asymptotic convergence rate in terms of the suboptimality gap. If $f$ further satisfies the global quadratic growth condition, we show an improved $\mathcal{O}(1/k)$ rate in terms of the squared distance to the optimal solution set. When $f$ is nonsmooth weakly convex and its subdifferential satisfies the global metric subregularity property, we derive an $\mathcal{O}(1/T^{1/4})$ convergence rate in expectation, where $T$ is the total number of iterations. We also establish an asymptotic convergence result. To justify the global metric subregularity property utilized in the analysis, we establish this error bound condition for the concrete (real-valued) robust phase retrieval problem, which is of independent interest. We further provide a convergence lemma and a relationship between the global metric subregularity properties of a weakly convex function and its Moreau envelope, both of which are also of independent interest. Finally, we conduct several experiments to demonstrate the possible superiority of RCS over the subgradient method.
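The coordinate-wise update described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: it treats each coordinate as one block, uses the nonsmooth convex test problem $f(x) = \|Ax - b\|_1$, and uses a simple diminishing step size; the function name `rcs_l1` and all parameter choices are assumptions for illustration only.

```python
import numpy as np

def rcs_l1(A, b, x0, n_iters=2000, seed=0):
    """Sketch of a randomized coordinate subgradient iteration on
    f(x) = ||Ax - b||_1, with each coordinate treated as one block.
    Returns the best iterate found (subgradient methods are not monotone)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    x_best, f_best = x.copy(), np.abs(A @ x - b).sum()
    n = x.size
    for k in range(n_iters):
        i = rng.integers(n)                     # sample one block uniformly at random
        r = A @ x - b
        g_i = A[:, i] @ np.sign(r)              # i-th entry of the subgradient A^T sign(Ax - b)
        x[i] -= (0.1 / np.sqrt(k + 1)) * g_i    # diminishing step size (illustrative choice)
        f_x = np.abs(A @ x - b).sum()
        if f_x < f_best:                        # track the best iterate seen so far
            x_best, f_best = x.copy(), f_x
    return x_best
```

Each iteration touches a single column of `A`, which is the source of the per-iteration cost advantage over the full subgradient method.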