Structured problems arise in many applications, and solving them efficiently requires exploiting the structural information. This paper focuses on convex problems with a finite-sum compositional structure. Finite-sum problems appear as the sample average approximation of a stochastic optimization problem and also arise in machine learning with a huge amount of training data. One popular numerical approach for finite-sum problems is the stochastic gradient method (SGM). However, the additional compositional structure prohibits easy access to an unbiased stochastic approximation of the gradient, so directly applying the SGM to a finite-sum compositional optimization problem (COP) is often inefficient. We design new algorithms for solving strongly-convex and convex two-level finite-sum COPs. Our design incorporates the Katyusha acceleration technique and adopts mini-batch sampling from both the outer-level and inner-level finite sums. We first analyze the algorithm for strongly-convex finite-sum COPs. Similar to a few existing works, we obtain a linear convergence rate in terms of the expected objective error, and from this rate we establish complexity results for producing an $\varepsilon$-solution. Our complexity results have the same dependence on the number of component functions as existing works; however, due to the Katyusha acceleration, they have a better dependence on the condition number $\kappa$, improving to $\kappa^{2.5}$ from the best-known $\kappa^3$. Finally, we analyze the algorithm for convex finite-sum COPs, which uses the algorithm for strongly-convex finite-sum COPs as a subroutine. Again, we obtain better complexity results than existing works in terms of the dependence on $\varepsilon$, improving to $\varepsilon^{-2.5}$ from the best-known $\varepsilon^{-3}$.
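For concreteness, a two-level finite-sum COP of the kind described above can be written in the standard compositional form (a sketch of the problem class; the symbols $f_i$, $g_j$, $n$, $m$, and $p$ are illustrative notation and not fixed by this abstract):
$$\min_{x\in\mathbb{R}^d}\; F(x) := \frac{1}{n}\sum_{i=1}^{n} f_i\Big(\frac{1}{m}\sum_{j=1}^{m} g_j(x)\Big),$$
where each $g_j:\mathbb{R}^d\to\mathbb{R}^p$ is an inner-level map and each $f_i:\mathbb{R}^p\to\mathbb{R}$ is an outer-level function. Because the gradient of $F$ involves the inner sum inside the nonlinear outer functions, sampling a single $(i,j)$ pair does not yield an unbiased gradient estimate, which is the difficulty noted above.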