Due to the non-smoothness of the Hinge loss in SVM, it is difficult to obtain faster convergence rates with modern optimization algorithms. In this paper, we introduce two smooth Hinge losses $\psi_G(\alpha;\sigma)$ and $\psi_M(\alpha;\sigma)$, which are infinitely differentiable and converge to the Hinge loss uniformly in $\alpha$ as $\sigma$ tends to $0$. By replacing the Hinge loss with these two smooth Hinge losses, we obtain two smooth support vector machines (SSVMs), respectively. Solving the SSVMs with the Trust Region Newton method (TRON) leads to two quadratically convergent algorithms. Experiments on text classification tasks show that the proposed SSVMs are effective in real-world applications. We also introduce a general smooth convex loss function that unifies several commonly used convex loss functions in machine learning. This general framework provides smooth approximations of non-smooth convex loss functions, yielding smooth models that can be solved with faster-converging optimization algorithms.
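As an illustration of this kind of smoothing (a sketch only, not the paper's own $\psi_G$ or $\psi_M$, whose definitions appear in the body of the paper), convolving the Hinge loss $h(\alpha)=\max(0,1-\alpha)$ with a Gaussian kernel of width $\sigma$ yields a closed-form, infinitely differentiable surrogate
$$
\psi(\alpha;\sigma)=(1-\alpha)\,\Phi\!\left(\frac{1-\alpha}{\sigma}\right)+\sigma\,\phi\!\left(\frac{1-\alpha}{\sigma}\right),
\qquad
0\le\psi(\alpha;\sigma)-h(\alpha)\le\frac{\sigma}{\sqrt{2\pi}},
$$
where $\Phi$ and $\phi$ denote the standard normal distribution function and density. The right-hand bound holds for every $\alpha$, so this surrogate converges to the Hinge loss uniformly as $\sigma\to 0$, which is exactly the approximation property required of the proposed smooth losses.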