Functional constrained optimization is becoming increasingly important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semisupervised learning, and robust optimization, among others. In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this method is a unified algorithm that achieves the best-known rate of convergence for solving different functional constrained convex composite problems, including convex or strongly convex, and smooth or nonsmooth problems with a stochastic objective and/or stochastic constraints. Many of these rates of convergence are in fact obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. In contrast to existing primal-dual methods, it does not require the projection of Lagrange multipliers onto a (possibly unknown) bounded set. Second, for nonconvex functional constrained problems, we introduce a new proximal point method that transforms the initial nonconvex problem into a sequence of convex problems by adding quadratic terms to both the objective and the constraints. Under a certain MFCQ-type assumption, we establish the convergence and rate of convergence of this method to KKT points when the convex subproblems are solved either exactly or inexactly. For large-scale and stochastic problems, we present a more practical proximal point method in which the approximate solutions of the subproblems are computed by the aforementioned ConEx method. To the best of our knowledge, most of the convergence and complexity results of the proximal point method for nonconvex problems are also new in the literature.
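To make the two algorithmic ideas above concrete, the following display is a minimal illustrative sketch under assumed notation (decision variable $x$, objective $f$, constraints $g_i$, prox-center $x_k$, and regularization parameters $\rho,\rho_i$); the symbols and constants are ours for illustration and need not match the paper's exact updates. The first line shows the linearization of a constraint at a point $x_t$, the building block used in the extrapolation (acceleration) step of ConEx; the second shows the $k$-th convex subproblem of the proximal point scheme, obtained by adding quadratic terms to both the objective and the constraints (with $\rho,\rho_i$ taken large enough that the regularized functions become convex):
\[
  \ell_{g_i}(x_t; x) \;=\; g_i(x_t) + \langle \nabla g_i(x_t),\, x - x_t \rangle ,
\]
\[
  \min_{x}\; f(x) + \rho\,\|x - x_k\|^2
  \quad \text{s.t.} \quad
  g_i(x) + \rho_i\,\|x - x_k\|^2 \;\le\; 0, \qquad i = 1,\dots,m .
\]
Working with the linearizations $\ell_{g_i}$ rather than the constraints themselves is what keeps the extrapolation step computable from a single (stochastic) first-order oracle call per constraint per iteration, consistent with ConEx being a single-loop method.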