Convex function-constrained optimization has received growing research interest lately. For a special class of convex problems with strongly convex function constraints, we develop a new accelerated primal-dual first-order method that attains an $\Ocal(1/\sqrt{\vep})$ complexity bound, improving on the $\Ocal(1/{\vep})$ result of state-of-the-art first-order methods. The key ingredient in our development is a set of novel techniques for progressively estimating the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence. In addition, we show that the complexity can be further improved, in terms of its dependence on certain problem parameters, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization with a separable convex objective and a strongly convex loss constraint. Beyond achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active set) of the optimal solution in finitely many steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.