In optimization, it is well known that gradient-based approaches can be extremely effective when the objective function is strictly convex and well-conditioned, e.g., achieving an exponential (linear) rate of convergence. Existing Lasso-type estimators, however, generally cannot attain this optimal rate because of the non-smooth behavior of the absolute value function at the origin. The homotopic method uses a sequence of surrogate functions to approximate the $\ell_1$ penalty that appears in Lasso-type estimators. These surrogate functions converge to the $\ell_1$ penalty of the Lasso estimator, while each individual surrogate is strictly convex, which enables a provably faster numerical rate of convergence. In this paper, we demonstrate that by carefully designing the surrogate functions, one can establish a faster numerical convergence rate than any existing method for computing Lasso-type estimators. Specifically, state-of-the-art algorithms can only guarantee $O(1/\epsilon)$ or $O(1/\sqrt{\epsilon})$ convergence rates, whereas we prove an $O([\log(1/\epsilon)]^2)$ rate for the newly proposed algorithm. Our numerical simulations show that the new algorithm also performs better empirically.
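The homotopic idea described above can be sketched as follows: replace the non-smooth $\ell_1$ term $|\beta_j|$ with a strictly convex smooth surrogate such as $\sqrt{\beta_j^2 + \mu^2}$, run gradient descent on the smoothed objective, and then shrink the smoothing parameter $\mu$ so the surrogate approaches the $\ell_1$ penalty. This is a minimal illustrative sketch, not the paper's actual algorithm; the surrogate choice, the $\mu$ schedule, and all names (`X`, `y`, `lam`, `mu0`) are assumptions.

```python
import numpy as np

def smoothed_lasso_gd(X, y, lam=0.1, mu0=1.0, n_stages=5, n_steps=200):
    """Homotopy sketch: gradient descent on a smoothed Lasso objective,
    (1/2n)||y - X beta||^2 + lam * sum_j sqrt(beta_j^2 + mu^2),
    with mu shrunk geometrically across stages (illustrative choice)."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the quadratic part
    mu = mu0
    for _ in range(n_stages):
        # The surrogate's gradient is at most (lam/mu)-Lipschitz, so shrink the step.
        step = 1.0 / (L + lam / mu)
        for _ in range(n_steps):
            grad = X.T @ (X @ beta - y) / n + lam * beta / np.sqrt(beta**2 + mu**2)
            beta -= step * grad
        mu /= 10.0  # tighten the surrogate toward the l1 penalty
    return beta
```

As $\mu \to 0$ the surrogate $\sqrt{\beta_j^2 + \mu^2}$ converges to $|\beta_j|$ pointwise, while for each fixed $\mu > 0$ the smoothed objective is smooth and strictly convex in the penalty term, so each inner loop enjoys the fast convergence of gradient descent on well-conditioned problems.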