We present a new approach to solve the sparse approximation or best subset selection problem, namely, to find a $k$-sparse vector ${\bf x}\in\mathbb{R}^d$ that minimizes the $\ell_2$ residual $\lVert A{\bf x}-{\bf y} \rVert_2$. We consider a regularized approach, whereby this residual is penalized by the non-convex \emph{trimmed lasso}, defined as the $\ell_1$-norm of ${\bf x}$ excluding its $k$ largest-magnitude entries. We prove that the trimmed lasso has several appealing theoretical properties, and in particular derive sparse recovery guarantees assuming successful optimization of the penalized objective. Next, we show empirically that directly optimizing this objective can be quite challenging. Instead, we propose a surrogate for the trimmed lasso, called the \emph{generalized soft-min}. This penalty smoothly interpolates between the classical lasso and the trimmed lasso, while taking into account all possible $k$-sparse patterns. The generalized soft-min penalty involves a summation over $\binom{d}{k}$ terms, yet we derive a polynomial-time algorithm to compute it. This, in turn, yields a practical method for the original sparse approximation problem. Via simulations, we demonstrate its competitive performance compared to the current state of the art.
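For concreteness, the trimmed lasso described above admits an explicit expression; the following is a minimal formulation (the symbol $\tau_k$ is our illustrative notation, not taken from the abstract). Letting $|x_{(1)}| \ge \cdots \ge |x_{(d)}|$ denote the entries of ${\bf x}$ sorted in decreasing order of magnitude,
$$
\tau_k({\bf x}) \;=\; \sum_{i=k+1}^{d} \bigl|x_{(i)}\bigr| \;=\; \min_{\substack{S \subseteq \{1,\dots,d\} \\ |S| = k}} \; \sum_{i \notin S} |x_i|,
$$
where the second equality holds because excluding the $k$ largest-magnitude entries is exactly the minimizing choice of $S$. In particular, $\tau_k({\bf x}) = 0$ if and only if ${\bf x}$ is $k$-sparse, which is what makes the penalty a natural relaxation of the best subset selection constraint.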