Machine learning algorithms based on parametrized quantum circuits are prime candidates for near-term applications on noisy quantum computers. Yet, our understanding of how these quantum machine learning models compare, both mutually and to classical models, remains limited. Previous works took important steps in this direction by showing a close connection between some of these quantum models and kernel methods, which are well studied in classical machine learning. In this work, we identify the first unifying framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models. In particular, we show how data re-uploading circuits, a generalization of linear models, can be efficiently mapped to equivalent linear quantum models. Going further, we also consider the experimentally relevant resource requirements of these models in terms of qubit number and data-sample efficiency, i.e., the amount of data needed to learn. We establish learning separations demonstrating that linear quantum models must use exponentially more qubits than data re-uploading models to solve certain learning tasks, while kernel methods additionally require exponentially more data points. Our results constitute significant strides towards a more comprehensive theory of quantum machine learning models and provide guidelines on which models may be better suited from an experimental perspective.
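For concreteness, the two model families contrasted above admit the following standard formulation from the parametrized-circuit literature; the symbols $\rho(x)$, $O_\theta$, $V(x)$, and $W(\theta_l)$ are illustrative notation, not fixed by this abstract. A linear quantum model is linear in a data-dependent quantum state, with all trainable parameters confined to the measured observable:
\[
  f_\theta(x) \;=\; \operatorname{Tr}\!\left[\rho(x)\, O_\theta\right]
  \qquad \text{(linear quantum model)},
\]
whereas a data re-uploading circuit interleaves data-encoding unitaries $V(x)$ with trainable unitaries $W(\theta_l)$, so that the data is "uploaded" repeatedly before measurement:
\[
  g_\theta(x) \;=\; \langle 0|\, U(x,\theta)^\dagger\, O\, U(x,\theta)\, |0\rangle,
  \qquad
  U(x,\theta) \;=\; \prod_{l=1}^{L} W(\theta_l)\, V(x)
  \qquad \text{(data re-uploading)}.
\]
For $L = 1$ the re-uploading form reduces to a linear model; the mapping referred to above recasts the general $L$ case as an equivalent linear quantum model in a larger Hilbert space.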