We propose a quantum algorithm for training nonlinear support vector machines (SVMs) for feature-space learning, where classical input data is encoded in the amplitudes of quantum states. Based on the classical SVM-perf algorithm of Joachims, our algorithm has a running time that scales linearly in the number of training examples $m$ (up to polylogarithmic factors) and applies to the standard soft-margin $\ell_1$-SVM model. In contrast, while classical SVM-perf has demonstrated impressive performance on both linear and nonlinear SVMs, its efficiency is guaranteed only in certain cases: it achieves linear $m$ scaling only for linear SVMs, where classification is performed in the original input data space, or for the special cases of low-rank or shift-invariant kernels. Similarly, previously proposed quantum algorithms either have super-linear scaling in $m$, or else apply to different SVM models such as the hard-margin or least-squares $\ell_2$-SVM, which lack certain desirable properties of the soft-margin $\ell_1$-SVM model. We classically simulate our algorithm and give evidence that it can perform well in practice, and not only for asymptotically large data sets.
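For readers unfamiliar with the soft-margin $\ell_1$-SVM model referenced above, the following is a minimal classical sketch: plain subgradient descent on the primal objective $\frac{1}{2}\|w\|^2 + C\sum_i \max(0,\, 1 - y_i(w \cdot x_i + b))$, where the $\ell_1$ penalty acts on the slack variables (hinge loss). This is an illustration only, not the paper's SVM-perf-based quantum algorithm; the toy data and all parameter choices are assumptions for the example.

```python
import numpy as np

# Soft-margin l1-SVM via primal subgradient descent (illustrative sketch,
# NOT the quantum algorithm or SVM-perf described in the abstract).
rng = np.random.default_rng(0)

# Toy 2-D data: two well-separated Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(+2.0, 0.5, size=(50, 2)),
               rng.normal(-2.0, 0.5, size=(50, 2))])
y = np.array([+1.0] * 50 + [-1.0] * 50)

w = np.zeros(2)
b = 0.0
C = 1.0  # slack-penalty weight (hypothetical choice)

for t in range(1, 2001):
    eta = 1.0 / t                       # decaying step size
    margins = y * (X @ w + b)
    viol = margins < 1.0                # examples with nonzero hinge loss
    # Subgradient of (1/2)||w||^2 + C * sum_i hinge_i:
    grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
    grad_b = -C * y[viol].sum()
    w -= eta * grad_w
    b -= eta * grad_b

acc = float(np.mean(np.sign(X @ w + b) == y))
print(f"training accuracy: {acc:.2f}")
```

On separable data like this, the hinge loss drives all slack variables to zero and the sketch attains perfect training accuracy; on noisy data, $C$ trades margin width against slack, which is the property distinguishing the soft-margin $\ell_1$-SVM from the hard-margin and least-squares $\ell_2$ variants mentioned above.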