The intrinsic probabilistic nature of quantum mechanics has motivated efforts to design quantum generative learning models (QGLMs). Despite empirical achievements, the foundations and potential advantages of QGLMs remain largely obscure. To narrow this knowledge gap, here we explore the generalization property of QGLMs, i.e., their capability to extend from learned to unknown data. We consider two prototypical QGLMs, quantum circuit Born machines and quantum generative adversarial networks, and explicitly derive their generalization bounds. Our results identify the advantages of QGLMs over classical methods when quantum devices can directly access the target distribution and quantum kernels are employed. We further employ these generalization bounds to exhibit potential advantages in quantum state preparation and Hamiltonian learning. Numerical results for QGLMs in loading Gaussian distributions and estimating the ground states of parameterized Hamiltonians accord with our theoretical analysis. Our work opens an avenue toward a quantitative understanding of the power of quantum generative learning models.
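To give a concrete picture of the first model class, the following is a minimal NumPy sketch of a quantum circuit Born machine trained to load a discretized Gaussian distribution by minimizing a maximum-mean-discrepancy (MMD) loss with a Gaussian kernel. The three-qubit hardware-efficient ansatz (RY rotations plus a CNOT chain), the kernel bandwidth, and the training loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# --- Minimal statevector simulator for a hardware-efficient ansatz ---
# (All sizes and hyperparameters below are illustrative assumptions.)
N_QUBITS = 3
DIM = 2 ** N_QUBITS
N_LAYERS = 3

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit):
    # Reshape so the target qubit is its own axis, then contract with the gate.
    state = state.reshape([2] * N_QUBITS)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(DIM)

def apply_cnot(state, control, target):
    # CNOT: flip the target axis on the control = 1 slice.
    state = state.reshape([2] * N_QUBITS).copy()
    idx = [slice(None)] * N_QUBITS
    idx[control] = 1
    ax = target if target < control else target - 1  # axis index after slicing
    state[tuple(idx)] = np.flip(state[tuple(idx)], axis=ax).copy()
    return state.reshape(DIM)

def born_probs(params):
    """Output distribution of the Born machine: |<x|U(params)|0...0>|^2."""
    state = np.zeros(DIM); state[0] = 1.0
    p = params.reshape(N_LAYERS, N_QUBITS)
    for layer in range(N_LAYERS):
        for q in range(N_QUBITS):
            state = apply_single(state, ry(p[layer, q]), q)
        for q in range(N_QUBITS - 1):
            state = apply_cnot(state, q, q + 1)
    return np.abs(state) ** 2

# --- MMD loss between the model and a discretized Gaussian target ---
xs = np.arange(DIM)
target = np.exp(-0.5 * ((xs - 3.5) / 1.2) ** 2)
target /= target.sum()
KERNEL = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / 2.0 ** 2)

def mmd_loss(p):
    d = p - target
    return d @ KERNEL @ d

# --- Gradient via the parameter-shift rule (exact for RY rotations) ---
def gradient(params):
    grad = np.zeros_like(params)
    p = born_probs(params)
    for i in range(len(params)):
        shift = np.zeros_like(params); shift[i] = np.pi / 2
        dp = (born_probs(params + shift) - born_probs(params - shift)) / 2
        grad[i] = 2 * (p - target) @ KERNEL @ dp  # chain rule on the MMD loss
    return grad

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, N_LAYERS * N_QUBITS)
for step in range(200):
    params -= 0.5 * gradient(params)

print("final MMD loss:", mmd_loss(born_probs(params)))
```

Because each output probability is the expectation value of a computational-basis projector, the parameter-shift rule gives exact gradients of the probabilities for rotation gates, and the MMD gradient follows by the chain rule; on hardware one would replace the exact probabilities with sampled estimates.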