With noisy intermediate-scale quantum computers showing great promise for near-term applications, a number of machine learning algorithms based on parametrized quantum circuits have been suggested as possible means to achieve learning advantages. Yet, our understanding of how these quantum machine learning models compare, both to existing classical models and to each other, remains limited. A big step in this direction has been made by relating them to so-called kernel methods from classical machine learning. Building on this connection, previous works have shown that a systematic reformulation of many quantum machine learning models as kernel models is guaranteed to improve their training performance. In this work, we first extend the applicability of this result to a more general family of parametrized quantum circuit models called data re-uploading circuits. Second, we show, through simple constructions and numerical simulations, that models defined and trained variationally can exhibit markedly better generalization performance, the true figure of merit of a machine learning task, than their kernel formulations. Our results constitute another step towards a more comprehensive theory of quantum machine learning models beyond kernel formulations.
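For context, the kernel connection referenced above can be sketched as follows (a minimal illustration using standard notation from this literature; the symbols ρ(x), O_θ, k, and α are generic placeholders, not definitions fixed by this abstract). A variational quantum model evaluates a parametrized observable on a data-encoding quantum state, while its kernel formulation expands the model over the training data through the overlap kernel:
\[
  f_{\theta}(x) = \operatorname{Tr}\!\bigl[\rho(x)\, O_{\theta}\bigr],
  \qquad
  f_{\alpha}(x) = \sum_{i=1}^{M} \alpha_{i}\, k\bigl(x, x^{(i)}\bigr),
  \qquad
  k(x, x') = \operatorname{Tr}\!\bigl[\rho(x)\, \rho(x')\bigr].
\]
In this picture, the variational model optimizes the circuit parameters θ directly, whereas the kernel model fixes the feature map and optimizes only the expansion coefficients α over the training set, which underlies the training guarantee but can constrain generalization.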