Finite-sum optimization plays an important role in machine learning and has therefore attracted a surge of interest in recent years. To address this problem, various randomized incremental gradient methods have been proposed, with guaranteed upper and lower complexity bounds for their convergence. Nonetheless, the existing lower bounds rely on certain restrictive conditions: a deterministic optimization algorithm, or a fixed probability distribution for the selection of component functions. Moreover, some lower bounds do not even match the upper bounds of the best known methods in certain cases. To break these limitations, we derive tight lower complexity bounds for randomized incremental gradient methods, including SAG, SAGA, SVRG, and SARAH, in two typical cases of finite-sum optimization. Specifically, our results tightly match the upper complexity of Katyusha or VRADA when each component function is strongly convex and smooth, and tightly match the upper complexity of SDCA without duality and of KatyushaX when the finite-sum function is strongly convex and the component functions are average smooth.
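For concreteness, the two cases referred to above can be stated over the standard finite-sum problem; the notation below ($n$, $f_i$, $\mu$, $L$) and the average-smoothness condition follow common usage in this literature and are a sketch of the setting rather than the paper's own formulation:
\[
  \min_{x \in \mathbb{R}^d} \; f(x) \triangleq \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
where in the first case each component $f_i$ is assumed to be $\mu$-strongly convex and $L$-smooth, while in the second case only the sum $f$ is $\mu$-strongly convex and the components are $L$-average smooth, i.e.,
\[
  \frac{1}{n} \sum_{i=1}^{n} \bigl\| \nabla f_i(x) - \nabla f_i(y) \bigr\|^2 \le L^2 \, \| x - y \|^2
  \quad \text{for all } x, y \in \mathbb{R}^d.
\]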