In this work, we consider algorithms for (nonlinear) regression problems with an $\ell_0$ penalty. Existing algorithms for $\ell_0$-based optimization problems are often carried out with a fixed step size, and the selection of an appropriate step size depends on the restricted strong convexity and smoothness constants of the loss function, which are difficult to compute in practice. Inspired by the ideas of support detection and root finding \cite{HJK2020}, we propose a novel and efficient data-driven line search rule to adaptively determine the appropriate step size. We prove an $\ell_2$ error bound for the proposed algorithm without much restriction on the cost functional. Extensive numerical comparisons with state-of-the-art algorithms on linear and logistic regression problems demonstrate the stability, effectiveness and superiority of the proposed algorithm.
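To illustrate the kind of scheme the abstract describes, the following is a minimal sketch (not the paper's exact algorithm) of iterative hard thresholding for the $\ell_0$-penalized least-squares problem, where the step size is chosen adaptively by a backtracking search instead of being fixed from the restricted smoothness constant; all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def hard_threshold(z, lam, step):
    # Proximal map of step*lam*||.||_0: keep entries with z^2 > 2*step*lam.
    return np.where(z ** 2 > 2.0 * step * lam, z, 0.0)

def iht_line_search(X, y, lam, step0=1.0, shrink=0.5, max_iter=200, tol=1e-8):
    """Iterative hard thresholding with a backtracking step-size search
    for min_b 0.5*||X b - y||^2 + lam*||b||_0 (illustrative sketch)."""
    _, p = X.shape
    beta = np.zeros(p)
    loss = lambda b: 0.5 * np.sum((X @ b - y) ** 2)
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y)
        step = step0
        while True:
            cand = hard_threshold(beta - step * grad, lam, step)
            d = cand - beta
            # Sufficient-decrease test for the smooth part of the objective;
            # shrink the step until it holds (data-driven, no Lipschitz constant).
            if loss(cand) <= loss(beta) + grad @ d + (0.5 / step) * (d @ d):
                break
            step *= shrink
            if step < 1e-12:
                break
        if np.linalg.norm(cand - beta) < tol:
            beta = cand
            break
        beta = cand
    return beta
```

The backtracking test plays the role of the fixed step-size condition: instead of requiring the restricted smoothness constant in advance, each iteration shrinks the step until the quadratic upper bound on the loss holds at the thresholded candidate.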