In recent proposals of quantum circuit models for generative tasks, the discussion of their performance has been limited to their ability to reproduce a known target distribution. For example, expressive model families such as Quantum Circuit Born Machines (QCBMs) have been evaluated almost entirely on their capability to learn a given target distribution with high accuracy. While this aspect may be ideal for some tasks, it limits the scope of a generative model's assessment to its ability to memorize data rather than generalize. As a result, there has been little understanding of a model's generalization performance and of the relation between such capability and the resource requirements, e.g., the circuit depth and the amount of training data. In this work, we leverage a recently proposed generalization evaluation framework to begin addressing this knowledge gap. We first investigate the QCBM's learning process on a cardinality-constrained distribution and observe an increase in generalization performance as the circuit depth is increased. In the 12-qubit example presented here, we observe that with as few as 30% of the valid patterns as the training set, the QCBM exhibits the best generalization performance toward generating unseen and valid patterns. Lastly, we assess the QCBM's ability to generalize not only to valid features, but to high-quality bitstrings distributed according to an adequately biased distribution. We see that the QCBM is able to effectively learn the bias and generate unseen samples of higher quality than those in the training set. To the best of our knowledge, this is the first work in the literature that presents the QCBM's generalization performance as an integral evaluation metric for quantum generative models, and that demonstrates the QCBM's ability to generalize to high-quality, desired novel samples.
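To make the cardinality-constrained setup concrete, the sketch below shows one way such a generalization check could be scored: enumerate the valid bitstrings (those with a fixed number of ones), hold out a training subset, and measure what fraction of a model's samples are valid and valid-but-unseen. The function names and the exact metric definitions here are illustrative assumptions, not the specific framework or code used in the paper.

```python
# Hypothetical sketch of a generalization check for a cardinality-constrained
# generative task. Metric names ("validity", "novelty") and helper functions
# are assumptions for illustration, not the paper's exact definitions.
from itertools import combinations

def valid_patterns(n_qubits, cardinality):
    """All bitstrings of length n_qubits with exactly `cardinality` ones."""
    patterns = set()
    for ones in combinations(range(n_qubits), cardinality):
        bits = ['0'] * n_qubits
        for i in ones:
            bits[i] = '1'
        patterns.add(''.join(bits))
    return patterns

def generalization_stats(generated, train_set, valid_set):
    """Fraction of generated samples that satisfy the constraint,
    and fraction that are both valid and absent from the training set."""
    samples = list(generated)
    valid = [s for s in samples if s in valid_set]
    unseen_valid = [s for s in valid if s not in train_set]
    return {
        "validity": len(valid) / len(samples),
        "novelty": len(unseen_valid) / len(samples),
    }

# Toy usage: 4 qubits, cardinality 2; train on ~30% of the 6 valid patterns.
valid_set = valid_patterns(4, 2)
train_set = set(sorted(valid_set)[:2])
samples = ["0011", "0101", "1100", "1111"]  # e.g., raw model samples
stats = generalization_stats(samples, train_set, valid_set)
```

In this toy run, three of the four samples satisfy the cardinality constraint, and one of them ("1100") is valid yet outside the training set, i.e., a generalized sample in the sense discussed above.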