Quantum kernel methods are a candidate for quantum speed-ups in supervised machine learning. The number of quantum measurements $N$ required for a reasonable kernel estimate is a critical resource, both from complexity considerations and because of the constraints of near-term quantum hardware. We emphasize that for classification tasks, the aim is accurate classification and not accurate kernel evaluation, and demonstrate that the former is more resource-efficient. In general, the uncertainty in the quantum kernel arising from finite sampling leads to misclassifications over some kernel instantiations. We introduce a suitable performance metric that characterizes the robustness or reliability of classification over a dataset, and obtain a bound on $N$ that ensures, with high probability, that classification errors over a dataset are bounded by the margin errors of an idealized quantum kernel classifier. Using techniques of robust optimization, we then show that the number of quantum measurements can be significantly reduced by a robust formulation of the original support vector machine. We consider the SWAP test and the GATES test quantum circuits for kernel evaluations, and show that the SWAP test is always less reliable than the GATES test for any $N$. Our strategy is applicable to uncertainty in quantum kernels arising from {\em any} source of noise, although we consider only statistical sampling noise in our analysis.
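As an illustration of the reliability claim (a minimal numerical sketch, not taken from the paper; the true kernel value $K$ and shot count are assumed for demonstration), one can compare the shot-noise variances of the two estimators. The GATES test measures the all-zeros outcome probability, which equals $K$ directly, giving variance $K(1-K)/N$; the SWAP test measures an ancilla with success probability $(1+K)/2$ and uses the estimator $\hat{K} = 2\hat{p}-1$, giving variance $(1-K^2)/N \ge K(1-K)/N$ for all $K \in [0,1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000          # measurement shots per kernel entry (assumed)
K = 0.6           # true kernel value |<psi|phi>|^2 (assumed for illustration)
trials = 20000    # repetitions to estimate the estimator variance

# GATES test: probability of the all-zeros outcome equals K directly.
gates_hat = rng.binomial(N, K, size=trials) / N

# SWAP test: ancilla success probability is p = (1 + K) / 2,
# so the kernel estimator is K_hat = 2 * p_hat - 1.
swap_hat = 2 * rng.binomial(N, (1 + K) / 2, size=trials) / N - 1

# Compare empirical variances with the shot-noise predictions:
# K(1-K)/N for GATES versus (1 - K^2)/N for SWAP.
print("GATES:", gates_hat.var(), "theory:", K * (1 - K) / N)
print("SWAP: ", swap_hat.var(), "theory:", (1 - K**2) / N)
```

Since $(1-K^2) = (1-K)(1+K) \ge K(1-K)$, the SWAP estimator has the larger variance at every $K$ and every $N$, consistent with the abstract's statement that the SWAP test is always less reliable than the GATES test.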