Classification is often the first problem described in introductory machine learning classes. Generalization guarantees for classification have historically been offered by Vapnik-Chervonenkis theory. Yet those guarantees rely on intractable algorithms, which has motivated the theory of surrogate methods in classification. Guarantees offered by surrogate methods rest on calibration inequalities, which have been shown to be highly sub-optimal under some margin conditions, falling short of capturing exponential convergence phenomena. Those "super-fast" rates are becoming well understood for smooth surrogates, but the picture remains blurry for non-smooth losses such as the hinge loss, associated with the renowned support vector machines. In this paper, we present a simple mechanism to obtain fast convergence rates and we investigate its use for SVM. In particular, we show that SVM can exhibit exponential convergence rates even without assuming the hard Tsybakov margin condition.
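For readers less familiar with the objects named above, the following display (a reminder, not part of the original abstract) recalls the hinge loss that underlies SVM and the hard Tsybakov (Massart) margin condition; here $\eta(x) = \mathbb{P}(Y = 1 \mid X = x)$ denotes the regression function.

\[
  \phi\bigl(y f(x)\bigr) = \max\bigl(0,\, 1 - y f(x)\bigr), \qquad y \in \{-1, +1\},
\]
\[
  \exists\, \eta_0 > 0 : \quad \bigl|\eta(X) - \tfrac{1}{2}\bigr| \ge \eta_0 \quad \text{almost surely}.
\]

The first line is the hinge surrogate for the 0-1 loss; the second is the hard margin condition, under which the label noise stays bounded away from the decision boundary.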