Machine learning algorithms based on parametrized quantum circuits are prime candidates for near-term applications on noisy quantum computers. In this direction, various types of quantum machine learning models have been introduced and studied extensively. Yet, our understanding of how these models compare, both mutually and to classical models, remains limited. In this work, we identify a constructive framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models. In particular, we show using tools from quantum information theory how data re-uploading circuits, an apparent outlier of this framework, can be efficiently mapped into the simpler picture of linear models in quantum Hilbert spaces. Furthermore, we analyze the experimentally relevant resource requirements of these models in terms of the number of qubits and the amount of data needed to learn. Based on recent results from classical machine learning, we prove that linear quantum models must utilize exponentially more qubits than data re-uploading models in order to solve certain learning tasks, while kernel methods additionally require exponentially more data points. Our results provide a more comprehensive view of quantum machine learning models as well as insights into the compatibility of different models with NISQ constraints.
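To make the distinction between the two model families concrete, below is a minimal single-qubit sketch in plain NumPy. It is our own illustration under assumed gate choices (an rx gate for data encoding and ry/rz gates for trainable parameters), not the paper's construction: a linear quantum model encodes the input x once into a feature state and then only optimizes the measured observable, whereas a data re-uploading circuit interleaves encoding gates with trainable gates.

```python
import numpy as np

# Pauli matrices and standard single-qubit rotation gates.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(a):  # rotation about the X axis by angle a
    return np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * X

def ry(a):  # rotation about the Y axis by angle a
    return np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * Y

def linear_model(x, theta):
    """Linear quantum model: encode x once, f(x) = <phi(x)| O_theta |phi(x)>.

    The model is linear in the fixed feature state |phi(x)>; training
    only changes the observable O_theta.
    """
    phi = rx(x) @ np.array([1, 0], dtype=complex)   # feature state |phi(x)>
    O = ry(theta).conj().T @ Z @ ry(theta)          # parametrized observable
    return np.real(phi.conj() @ O @ phi)

def reuploading_model(x, thetas):
    """Data re-uploading circuit: alternate encoding rx(x) and trainable ry(theta_l)."""
    psi = np.array([1, 0], dtype=complex)           # start in |0>
    for th in thetas:
        psi = ry(th) @ rx(x) @ psi                  # one encoding + trainable layer
    return np.real(psi.conj() @ Z @ psi)            # expectation value of Z

x = 0.7
print(linear_model(x, theta=0.3))                   # Fourier frequencies {0, +-1} in x
print(reuploading_model(x, thetas=[0.3, 1.1, -0.5]))  # frequencies up to +-L for L layers
```

With a single encoding gate, the linear model above can only realize functions of the form a + b cos(x) + c sin(x), while L re-uploads of x generate Fourier frequencies up to L. The abstract's claim can be read through this lens: a re-uploading circuit can still be rewritten as a linear model in a larger Hilbert space, but (as the paper proves) matching its expressivity this way can cost exponentially many additional qubits.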