Determining the ground state and the ground-state properties of quantum many-body systems is generically a hard task for classical algorithms. For a family of Hamiltonians defined on an $m$-dimensional space of physical parameters, the ground state and its properties at an arbitrary parameter configuration can be predicted via a machine learning protocol up to a prescribed prediction error $\varepsilon$, provided that a sample set (of size $N$) of the states can be efficiently prepared and measured. In a recent work [Huang et al., Science 377, eabk3333 (2022)], a rigorous guarantee for such a generalization was proved. Unfortunately, an exponential scaling in the inverse prediction error, $N = m^{ {\cal{O}} \left(\frac{1}{\varepsilon} \right) }$, was found to be universal for generic gapped Hamiltonians. This result applies to settings where the dimension of the parameter space is large while the scaling with the accuracy is not a pressing concern, so it does not enter the regime of highly precise learning and prediction. In this work, we consider the alternative scenario, where $m$ is a finite, not necessarily large, constant while the scaling with the prediction error becomes the central concern. By exploiting physical constraints and positive good kernels for predicting the density matrix, we rigorously obtain an exponentially improved sample complexity, $N = \mathrm{poly} \left(\varepsilon^{-1}, n, \log \frac{1}{\delta}\right)$, where $\mathrm{poly}$ denotes a polynomial function, $n$ is the number of qubits in the system, and $1-\delta$ is the probability of success. Moreover, when restricted to learning ground-state properties under strong locality assumptions, the number of samples can be further reduced to $N = \mathrm{poly} \left(\varepsilon^{-1}, \log \frac{n}{\delta}\right)$. This provably rigorous result represents a significant improvement on, and an indispensable extension of, the existing work.
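The learning protocol summarized above can be illustrated with a minimal sketch: sample parameter configurations, measure the ground-state property at each sampled point, and predict the property at an unseen configuration by a positive-kernel-weighted average over the samples. Everything below is an assumption for illustration only: the toy function `true_property` stands in for a measured expectation value $\mathrm{tr}(O\rho(\mathbf{x}))$, and the Gaussian kernel is merely one example of a positive kernel, not the specific kernel construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2       # dimension of the parameter space (a finite constant, as in the text)
N = 2000    # number of sampled parameter configurations

def true_property(x):
    # Toy stand-in for a ground-state expectation value tr(O * rho(x)),
    # assumed smooth in the physical parameters.
    return np.sin(np.pi * x[..., 0]) * np.cos(np.pi * x[..., 1])

# "Training" data: sampled parameter configurations and the (here noiseless)
# measured property at each one.
X_train = rng.uniform(-1.0, 1.0, size=(N, m))
y_train = true_property(X_train)

def gaussian_kernel(x, X, sigma=0.15):
    # A positive kernel on the parameter space (illustrative choice).
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def predict(x, X, y, sigma=0.15):
    # Kernel-weighted average of the sampled properties
    # (a Nadaraya-Watson-type estimator).
    w = gaussian_kernel(x, X, sigma)
    return np.sum(w * y) / np.sum(w)

# Predict the property at an unseen parameter configuration.
x_test = np.array([0.3, -0.4])
y_hat = predict(x_test, X_train, y_train)
err = abs(y_hat - true_property(x_test[None, :])[0])
```

In this sketch the prediction error shrinks as the kernel bandwidth and sampling density are refined together, which is the regime the abstract targets: $m$ fixed and the dependence on $\varepsilon^{-1}$ the quantity of interest.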