The intrinsic probabilistic nature of quantum mechanics motivates efforts to design quantum generative learning models (QGLMs) with computational advantages over their classical counterparts. To date, two prototypical QGLMs are quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs), which approximate the target distribution explicitly and implicitly, respectively. Despite their empirical success, the fundamental theory of these models remains largely obscure. To narrow this knowledge gap, we explore the learnability of QCBMs and QGANs from the perspective of generalization when the loss is specified to be the maximum mean discrepancy. In particular, we first analyze the generalization ability of QCBMs and identify their superiority when the quantum devices can directly access the target distribution and quantum kernels are employed. We then prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and the input states. This bound can further be used to seek potential quantum advantages in Hamiltonian learning tasks. Numerical results for QGLMs approximating quantum states, Gaussian distributions, and ground states of parameterized Hamiltonians accord with our theoretical analysis. Our work opens an avenue for quantitatively understanding the power of quantum generative learning models.
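As a point of reference for the loss discussed above, the maximum mean discrepancy (MMD) between two sample sets can be estimated from pairwise kernel evaluations. The following is a minimal sketch with a classical Gaussian (RBF) kernel; the function names and the choice of bandwidth `sigma` are illustrative, and in the quantum setting the RBF kernel would be replaced by a quantum kernel as the abstract describes.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))

def mmd_squared(p_samples, q_samples, sigma=1.0):
    """Biased (V-statistic) estimate of MMD^2 between two sample sets.

    MMD^2 = E[k(p, p')] + E[k(q, q')] - 2 E[k(p, q)],
    estimated by averaging the corresponding kernel matrices.
    """
    k_pp = gaussian_kernel(p_samples, p_samples, sigma)
    k_qq = gaussian_kernel(q_samples, q_samples, sigma)
    k_pq = gaussian_kernel(p_samples, q_samples, sigma)
    return k_pp.mean() + k_qq.mean() - 2.0 * k_pq.mean()
```

Training a QCBM or QGAN under this loss amounts to minimizing `mmd_squared` between samples drawn from the model and samples from (or direct access to) the target distribution.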