The linear Support Vector Machine (SVM) is a classic classification technique in machine learning. Motivated by applications in modern high-dimensional statistics, we consider penalized SVM problems that minimize a hinge-loss function plus a convex sparsity-inducing regularizer such as: the L1-norm on the coefficients, its grouped generalization, and the sorted L1-penalty (aka Slope). Each problem can be expressed as a Linear Program (LP) and is computationally challenging when the number of features and/or samples is large -- the current state of algorithms for these problems is rather nascent compared to that for the usual L2-regularized linear SVM. To this end, we propose new computational algorithms for these LPs that bring together techniques from (a) classical column (and constraint) generation methods and (b) first-order methods for non-smooth convex optimization. While each family of techniques is useful on its own, they are rarely used together in the context of solving large-scale LPs such as the ones studied herein. Our approach combines the complementary strengths of (a) and (b) -- leading to a scheme that appears to significantly outperform commercial solvers as well as specialized implementations for these problems. We present numerical results on a series of real and synthetic datasets demonstrating the surprising effectiveness of classic column/constraint generation methods on challenging LP-based machine learning tasks.
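To make the LP formulation concrete, the sketch below casts the L1-penalized linear SVM as a standard-form LP and hands it to an off-the-shelf solver (SciPy's HiGHS backend). This is only an illustration of the baseline formulation the abstract refers to, not the column/constraint generation scheme proposed in the paper; the function name `l1_svm_lp` and the choice of regularization weight are my own.

```python
import numpy as np
from scipy.optimize import linprog


def l1_svm_lp(X, y, lam=0.1):
    """Solve the L1-penalized linear SVM as an LP (illustrative sketch).

    minimize  (1/n) * sum(xi) + lam * ||beta||_1
    s.t.      xi_i >= 1 - y_i * (x_i @ beta + beta0),  xi_i >= 0.

    The L1-norm and the free variables are linearized by splitting into
    nonnegative parts: beta = bp - bm, beta0 = b0p - b0m.
    """
    n, p = X.shape
    # LP variable layout: [bp (p), bm (p), b0p, b0m, xi (n)], all >= 0.
    c = np.concatenate([lam * np.ones(2 * p), [0.0, 0.0], np.ones(n) / n])
    # Margin constraints rewritten in A_ub @ z <= b_ub form:
    # -y_i * x_i @ (bp - bm) - y_i * (b0p - b0m) - xi_i <= -1
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    beta = z[:p] - z[p:2 * p]
    beta0 = z[2 * p] - z[2 * p + 1]
    return beta, beta0


# Usage on small synthetic (near-separable) data:
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + 0.1 > 0, 1.0, -1.0)
beta, beta0 = l1_svm_lp(X, y, lam=0.01)
pred = np.sign(X @ beta + beta0)
```

Note that the LP has 2p + 2 + n variables and n inequality constraints, so a dense general-purpose solver scales poorly as n and p grow, which is precisely the regime that motivates the column/constraint generation approach described above.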