A key problem in quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [arXiv:2011.01938] partially answered this question through the lens of quantum kernel learning: they showed that quantum kernels can learn specific datasets with lower generalization error than the optimal classical kernel methods. However, most of their results are established in an ideal setting and ignore the limitations of near-term quantum machines. A crucial open question is therefore: does the power of quantum kernels still hold in the NISQ setting? In this study, we fill this knowledge gap by characterizing the power of quantum kernels when quantum system noise and sampling error are taken into account. Concretely, we first prove that the advantage of quantum kernels vanishes for large dataset sizes, small numbers of measurements, and high system noise. To preserve the superiority of quantum kernels in the NISQ era, we further devise an effective method based on indefinite kernel learning. Numerical simulations accord with our theoretical results. Our work provides theoretical guidance for designing advanced quantum kernels that attain quantum advantages on NISQ devices.
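To make the two error sources above concrete, the following minimal Python sketch (not the authors' implementation; the depolarizing-noise and shot-count models, and every function name, are illustrative assumptions) simulates how system noise and a finite number of measurements can turn an ideal fidelity-style kernel matrix indefinite, and then repairs it with a standard indefinite-kernel-learning fix, clipping negative eigenvalues:

```python
# Illustrative sketch only: shows how shot noise and depolarizing noise
# distort a quantum kernel matrix, and one way indefinite kernel learning
# can repair it. The noise model and parameters are assumptions, not the
# paper's construction.
import numpy as np

rng = np.random.default_rng(0)

def noisy_kernel_entry(k_ideal, shots, p_depol):
    """Estimate one kernel entry K_ij = |<phi(x_i)|phi(x_j)>|^2.

    Toy model: depolarizing noise shrinks the ideal value toward a
    maximally mixed outcome; finite shots add binomial sampling error.
    """
    k_noisy = (1 - p_depol) * k_ideal + p_depol * 0.5
    return rng.binomial(shots, k_noisy) / shots

def clip_to_psd(K):
    """Project a symmetric indefinite matrix onto the PSD cone by
    clipping negative eigenvalues (a common indefinite-kernel fix)."""
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0.0, None)) @ V.T

# Ideal Gram matrix from a toy feature map: unit-norm vectors with a
# fidelity-style kernel |<x_i, x_j>|^2, so entries lie in [0, 1].
n = 20
X = rng.normal(size=(n, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)
K_ideal = (X @ X.T) ** 2

# Noisy estimate: few shots plus system noise can break positive
# semidefiniteness of the estimated Gram matrix.
K_est = np.array([[noisy_kernel_entry(K_ideal[i, j], shots=100, p_depol=0.1)
                   for j in range(n)] for i in range(n)])
K_est = (K_est + K_est.T) / 2  # symmetrize the independent estimates

print("min eigenvalue before clipping:", np.linalg.eigvalsh(K_est).min())
print("min eigenvalue after clipping: ", np.linalg.eigvalsh(clip_to_psd(K_est)).min())
```

Eigenvalue clipping is only one of several spectrum-modification strategies studied in indefinite kernel learning (flipping or shifting the spectrum are common alternatives); it is used here purely to illustrate the kind of post-processing the abstract alludes to.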