We consider the stochastic optimization problem with smooth but not necessarily convex objectives in the heavy-tailed noise regime, where the noise in the stochastic gradient is only assumed to have a bounded $p$th moment ($p\in(1,2]$). Zhang et al. (2020) are the first to prove the $\Omega(T^{\frac{1-p}{3p-2}})$ lower bound for in-expectation convergence and provide a simple clipping algorithm that matches this optimal rate. Cutkosky and Mehta (2021) propose another algorithm, which is shown to achieve the nearly optimal high-probability convergence guarantee $O(\log(T/\delta)T^{\frac{1-p}{3p-2}})$, where $\delta$ is the failure probability. However, this desirable guarantee is established only under the additional assumption that the stochastic gradient itself has a bounded $p$th moment, which fails to hold even for quadratic objectives with centered Gaussian noise. In this work, we first improve the analysis of the algorithm of Cutkosky and Mehta (2021) to obtain the same nearly optimal high-probability convergence rate $O(\log(T/\delta)T^{\frac{1-p}{3p-2}})$ without the above-mentioned restrictive assumption. Next, and perhaps surprisingly, we show that one can achieve a rate faster than that dictated by the lower bound $\Omega(T^{\frac{1-p}{3p-2}})$ with only a tiny bit of additional structure, namely when the objective function $F(x)$ takes the form $\mathbb{E}_{\Xi\sim\mathcal{D}}[f(x,\Xi)]$, arguably the most widely applicable class of stochastic optimization problems. For this class of problems, we propose the first variance-reduced accelerated algorithm and establish that it guarantees a high-probability convergence rate of $O(\log(T/\delta)T^{\frac{1-p}{2p-1}})$ under a mild condition, which is faster than $\Omega(T^{\frac{1-p}{3p-2}})$. Notably, even when specialized to the finite-variance case ($p=2$), our result yields the (near-)optimal high-probability rate $O(\log(T/\delta)T^{-1/3})$.
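To make the clipping mechanism referenced above concrete, the following is a minimal, illustrative sketch of SGD with per-step gradient clipping under heavy-tailed noise. It is not the exact algorithm analyzed in Zhang et al. (2020) or Cutkosky and Mehta (2021); the names `clipped_sgd` and `grad_oracle`, as well as the step-size `eta` and clipping-threshold `tau` values, are hypothetical choices for demonstration only.

```python
import numpy as np


def clipped_sgd(grad_oracle, x0, T, eta, tau):
    """Run T steps of SGD with per-step gradient clipping.

    Clipping is the basic mechanism for the heavy-tailed regime: by capping
    the norm of each stochastic gradient at tau, a single heavy-tailed sample
    cannot move the iterate arbitrarily far, even though the gradient noise
    only has a bounded p-th moment for some p in (1, 2].
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        g = grad_oracle(x)
        norm = np.linalg.norm(g)
        if norm > tau:
            # Project the stochastic gradient onto the ball of radius tau.
            g = g * (tau / norm)
        x = x - eta * g
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy problem: F(x) = 0.5 * ||x||^2, whose true gradient is x. The
    # stochastic gradient adds Student-t noise with 1.5 degrees of freedom,
    # which has infinite variance but a finite p-th moment for p < 1.5.
    def grad_oracle(x):
        return x + rng.standard_t(df=1.5, size=x.shape)

    x_final = clipped_sgd(grad_oracle, x0=np.ones(10), T=5000, eta=0.01, tau=1.0)
    print("final iterate norm:", np.linalg.norm(x_final))
```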