The support vector machine (SVM) has attracted great attention over the last two decades owing to its extensive applications, and numerous optimization models have thus been proposed. To distinguish our approach from all of them, in this paper we introduce a new model equipped with an $L_{0/1}$ soft-margin loss (dubbed $L_{0/1}$-SVM), which well captures the nature of binary classification. Many of the existing convex/non-convex soft-margin losses can be viewed as surrogates of the $L_{0/1}$ soft-margin loss. Despite the discrete nature of $L_{0/1}$, we manage to establish the existence of a global minimizer of the new model and to reveal the relationships among its minimizers and KKT/P-stationary points. These theoretical properties allow us to take advantage of the alternating direction method of multipliers (ADMM). In addition, the $L_{0/1}$-support vector operator is introduced as a filter to prevent outliers from becoming support vectors during training; hence the method is expected to be relatively robust. Finally, numerical experiments demonstrate that the proposed method achieves better performance, with much shorter computational time and far fewer support vectors, than several other leading SVM methods. The larger the data size, the more evident this advantage becomes.
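To make the loss described above concrete, the following is a minimal sketch (not the paper's implementation) of the $L_{0/1}$-SVM objective: each margin violation is counted once, with a fixed penalty of $1$, rather than penalized proportionally to its size as the hinge loss would. The function name and the penalty parameter `C` are illustrative assumptions.

```python
import numpy as np

def l01_svm_objective(w, b, X, y, C):
    """Sketch of the L0/1-SVM objective (illustrative, not the paper's code):
        0.5 * ||w||^2 + C * #{ i : 1 - y_i (w @ x_i + b) > 0 },
    i.e. the regularizer plus C times the number of margin violations."""
    margins = 1.0 - y * (X @ w + b)        # positive entry => margin violated
    violations = np.count_nonzero(margins > 0)  # 0/1 count, not violation size
    return 0.5 * np.dot(w, w) + C * violations
```

Because the second term only counts violations, an outlier far on the wrong side of the margin contributes the same penalty ($C$) as a point just inside it, which is one intuition behind the robustness claim.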