Support Vector Machines (SVMs) are among the most fundamental tools for binary classification. In its simplest formulation, an SVM produces a hyperplane that separates the two classes with the largest possible margin. The focus on maximizing the margin has been well motivated by numerous generalization bounds. In this paper, we revisit and improve the classic margin-based generalization bounds. Furthermore, we complement our new generalization bound with a nearly matching lower bound, thus almost settling the generalization performance of SVMs in terms of margins.
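To make the margin quantity concrete, here is a minimal sketch (not from the paper) computing the geometric margin of a candidate separating hyperplane on hypothetical toy data: the margin is the smallest signed distance from any training point to the hyperplane, and the SVM is the hyperplane maximizing this quantity.

```python
import numpy as np

# Hypothetical toy data: two linearly separable classes in the plane,
# with labels y in {+1, -1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])

# A candidate separating hyperplane {x : w.x + b = 0}.
w = np.array([1.0, 1.0])
b = 0.0

# Geometric margin of each point: signed distance to the hyperplane,
# positive when the point lies on the correct side.
margins = y * (X @ w + b) / np.linalg.norm(w)

# The margin of the hyperplane is the minimum over all training points;
# the SVM chooses w, b to maximize this minimum.
print(margins.min())
```

Any hyperplane with all margins positive separates the data; the max-margin choice is the one a hard-margin SVM would return.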