Classification is a common task in machine learning. Random features (RFs) are a central technique for scalable learning algorithms based on kernel methods, and the recently proposed optimized random features, sampled from a distribution that depends on the model and the data, can significantly reduce and provably minimize the required number of features. However, existing research on classification with optimized RFs has suffered from the computational hardness of sampling each optimized RF; moreover, it has failed to achieve the exponentially fast error convergence that other state-of-the-art kernel methods attain under a low-noise condition. To overcome these slowdowns, we construct a classification algorithm with optimized RFs accelerated by quantum machine learning (QML) and analyze its runtime to clarify the overall advantage. We prove that our algorithm achieves exponential error convergence under the low-noise condition even with optimized RFs; at the same time, owing to QML, it exploits the significant reduction in the number of features without incurring the computational hardness of sampling. These results reveal a promising application of QML: accelerating a leading kernel-based classification algorithm without sacrificing its wide applicability or its exponential error-convergence speed.
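For context, the following is a minimal sketch of the classical baseline the abstract builds on: classification with random Fourier features approximating an RBF kernel (Rahimi and Recht), with a linear model fit on the features. It uses standard data-independent Gaussian sampling of the features, not the optimized, model- and data-dependent sampling (or its quantum acceleration) that the paper proposes; the dataset, dimensions, and ridge parameter are illustrative assumptions.

```python
# Sketch of RF-based classification, assuming a standard RBF kernel
# k(x, y) = exp(-gamma * ||x - y||^2). The paper's optimized RFs would
# replace the Gaussian sampling of W below with sampling from an
# optimized, data-dependent distribution (classically hard; the paper
# accelerates it with QML).
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=200, gamma=1.0):
    """Map X of shape (n_samples, d) to RFs approximating an RBF kernel."""
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    return Z, (W, b)

# Toy binary labels with a nonlinear decision boundary (assumed data).
X = rng.normal(size=(500, 2))
y = np.sign(X[:, 0] * X[:, 1])

Z, _ = random_fourier_features(X)
# Ridge-regularized least squares on the features, thresholded at zero.
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print("training accuracy:", np.mean(np.sign(Z @ w) == y))
```

With a few hundred features this typically classifies the toy data well; the point of optimized RFs is to provably minimize how many such features are needed, at the cost of a harder sampling problem.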