We propose ZeroSARAH -- a novel variant of the variance-reduced method SARAH (Nguyen et al., 2017) -- for minimizing the average of a large number of nonconvex functions $\frac{1}{n}\sum_{i=1}^{n}f_i(x)$. To the best of our knowledge, in this nonconvex finite-sum regime, all existing variance-reduced methods, including SARAH, SVRG, SAGA and their variants, need to compute the full gradient over all $n$ data samples at the initial point $x^0$, and then periodically recompute the full gradient once every few iterations (for SVRG, SARAH and their variants). Moreover, SVRG, SAGA and their variants typically achieve weaker convergence results than variants of SARAH: $n^{2/3}/\epsilon^2$ vs. $n^{1/2}/\epsilon^2$. ZeroSARAH is the first variance-reduced method that does not require any full gradient computations, not even at the initial point. Moreover, ZeroSARAH obtains new state-of-the-art convergence results, which improve upon the previous best-known results (given by, e.g., SPIDER, SpiderBoost, SARAH, SSRGD and PAGE) in certain regimes. Avoiding all full gradient computations (a time-consuming step) is important in many applications, as the number of data samples $n$ is usually very large. Especially in the distributed setting, periodically computing the full gradient over all data samples requires periodically synchronizing all machines/devices, which may be very hard or even impossible to achieve. Thus, we expect ZeroSARAH to have a practical impact in distributed and federated learning, where full device participation is impractical.
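To make the full-gradient issue concrete, the following is a minimal sketch of the classic SARAH recursive gradient estimator (Nguyen et al., 2017) that ZeroSARAH builds on, applied to a toy least-squares finite sum. The objective, step size, and loop lengths here are illustrative assumptions, not from the paper; the full-gradient computation at the start of each inner loop is exactly the step that ZeroSARAH eliminates.

```python
import numpy as np

# Toy finite sum: f(x) = (1/n) * sum_i f_i(x), with
# f_i(x) = 0.5 * (a_i^T x - b_i)^2 (illustrative choice, not from the paper).
rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the single component f_i at x.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Full gradient over all n samples -- the expensive step
    # that ZeroSARAH avoids entirely.
    return (A.T @ (A @ x - b)) / n

def sarah_epoch(x, eta=0.05, m=50):
    # One SARAH inner loop: one full-gradient restart, then m cheap
    # recursive updates using a single random sample per step.
    v = full_grad(x)                     # full-gradient restart (removed in ZeroSARAH)
    x_prev = x.copy()
    x = x - eta * v
    for _ in range(m):
        i = rng.integers(n)
        # SARAH recursive estimator: v^t = grad_i(x^t) - grad_i(x^{t-1}) + v^{t-1}
        v = grad_i(x, i) - grad_i(x_prev, i) + v
        x_prev, x = x, x - eta * v
    return x

x = np.zeros(d)
for _ in range(20):
    x = sarah_epoch(x)
print("final full-gradient norm:", np.linalg.norm(full_grad(x)))
```

Each epoch above costs $n$ gradient evaluations for the restart plus $2m$ for the inner loop; ZeroSARAH's contribution is removing the $n$-cost restarts (including the one at $x^0$) while retaining SARAH-style variance reduction.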