We study the complexity of training classical and quantum machine learning (ML) models for predicting outcomes of physical experiments. The experiments depend on an input parameter $x$ and involve the execution of a (possibly unknown) quantum process $\mathcal{E}$. Our figure of merit is the number of runs of $\mathcal{E}$ during training, disregarding other measures of runtime. A classical ML model performs a measurement and records the classical outcome after each run of $\mathcal{E}$, while a quantum ML model can access $\mathcal{E}$ coherently to acquire quantum data; the classical or quantum data is then used to predict outcomes of future experiments. We prove that, for any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the optimal quantum ML model. In contrast, for achieving accurate prediction on all inputs, we show that exponential quantum advantage is possible for certain tasks. For example, to predict expectation values of all Pauli observables in an $n$-qubit system $\rho$, we present a quantum ML model using only $\mathcal{O}(n)$ copies of $\rho$ and prove that classical ML models require $2^{\Omega(n)}$ copies.