A key problem in quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [Nat Commun 12, 2631] partially answered this question through the lens of quantum kernel learning: they showed that quantum kernels can learn specific datasets with lower generalization error than the optimal classical kernel methods. However, most of their results were established in an idealized setting and ignore the limitations of near-term quantum machines. A crucial open question is therefore: does the power of quantum kernels still hold in the NISQ setting? In this study, we fill this knowledge gap by analyzing the power of quantum kernels when quantum system noise and sampling error are taken into account. Concretely, we first prove that the advantage of quantum kernels vanishes for large datasets, a small number of measurements, and large system noise. With the aim of preserving the superiority of quantum kernels in the NISQ era, we further devise an effective method based on indefinite kernel learning. Numerical simulations accord with our theoretical results. Our work provides theoretical guidance for exploring advanced quantum kernels to attain quantum advantages on NISQ devices.
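To make the two failure modes and the remedy concrete, below is a minimal numerical sketch, not the paper's actual construction: it corrupts an ideal kernel matrix with an illustrative depolarizing-style noise model and finite measurement shots, then restores positive semidefiniteness by clipping negative eigenvalues, one standard technique for handling the indefinite kernel matrices that such noise produces. The function names, the toy kernel, and the noise parameters are all assumptions for illustration.

```python
import numpy as np

def noisy_kernel_entry(k_true, p, m, rng):
    """Estimate one quantum kernel entry under depolarizing-style noise of
    strength p with m measurement shots (illustrative model: the ideal entry
    is a measurement success probability, shrunk toward 1/2 by noise, then
    estimated from m binomial samples)."""
    k_depolarized = (1 - p) * k_true + p * 0.5
    return rng.binomial(m, k_depolarized) / m

def psd_clip(K):
    """Project a noise-corrupted, possibly indefinite kernel matrix onto the
    PSD cone by symmetrizing and clipping negative eigenvalues to zero."""
    K = (K + K.T) / 2
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(0)
n = 8
# Toy ideal (PSD) kernel: unit diagonal, constant off-diagonal similarity.
K_true = 0.3 * np.ones((n, n)) + 0.7 * np.eye(n)

# Entry-wise noisy estimation: few shots and nonzero noise distort K.
K_hat = np.empty_like(K_true)
for i in range(n):
    for j in range(n):
        K_hat[i, j] = noisy_kernel_entry(K_true[i, j], p=0.1, m=200, rng=rng)

K_fixed = psd_clip(K_hat)
print("min eigenvalue before clip:", np.linalg.eigvalsh((K_hat + K_hat.T) / 2).min())
print("min eigenvalue after clip: ", np.linalg.eigvalsh(K_fixed).min())
```

Increasing the noise strength p or decreasing the shot count m makes the estimated matrix drift further from the ideal kernel, which mirrors the regimes in which the proven advantage vanishes; the clipping step is one simple instance of the indefinite-kernel repairs the abstract alludes to.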