Generalization bounds are a critical tool to assess the training-data requirements of quantum machine learning (QML). Recent work has established guarantees for in-distribution generalization of quantum neural networks (QNNs), where training and testing data are assumed to be drawn from the same data distribution. However, there are currently no results on out-of-distribution generalization in QML, where a trained model is required to perform well even on data drawn from a distribution different from the training distribution. In this work, we prove out-of-distribution generalization for the task of learning an unknown unitary with a QNN, for a broad class of training and testing distributions. In particular, we show that one can learn the action of a unitary on entangled states using only product-state training data. We illustrate this numerically by showing that the evolution of a Heisenberg spin chain can be learned using only product training states. Since product states can be prepared using only single-qubit gates, this advances the prospects of learning quantum dynamics on near-term quantum computers and quantum experiments, and further opens up new methods for both the classical and quantum compilation of quantum circuits.
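For concreteness, the following is a minimal NumPy/SciPy sketch (not the authors' code) of the kind of experiment described above: a parameterized two-qubit unitary is fit to a Heisenberg evolution using only random product states, and the trained model's fidelity is then evaluated on Haar-random, generically entangled test states. The two-qubit system size, the Pauli-generator ansatz, the fidelity loss, and the BFGS optimizer are all illustrative assumptions.

```python
# Minimal sketch: out-of-distribution generalization for unitary learning.
# Train on random PRODUCT states only; test on Haar-random (entangled) states.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

# Target: Heisenberg evolution U = exp(-i t (XX + YY + ZZ)) on two qubits.
H_target = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
U = expm(-1j * 0.7 * H_target)

# Ansatz (assumed): V(theta) = exp(-i sum_k theta_k P_k) over all 15
# nontrivial two-qubit Pauli strings, a fully expressive parameterization.
gens = [np.kron(a, b) for a in paulis for b in paulis][1:]  # drop I (x) I

def V(theta):
    return expm(-1j * sum(t * g for t, g in zip(theta, gens)))

def rand_product_state():
    # Tensor product of two Haar-random single-qubit states: no entanglement.
    a, b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return np.kron(a / np.linalg.norm(a), b / np.linalg.norm(b))

def rand_state():
    # Haar-random two-qubit state: generically entangled.
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    return v / np.linalg.norm(v)

train = [rand_product_state() for _ in range(20)]

def loss(theta):
    # Average infidelity 1 - |<psi| V(theta)^dag U |psi>|^2 over training set.
    Vt = V(theta)
    return np.mean([1 - abs(psi.conj() @ Vt.conj().T @ U @ psi) ** 2
                    for psi in train])

res = minimize(loss, rng.normal(scale=0.1, size=len(gens)), method="BFGS")
Vopt = V(res.x)

# Out-of-distribution test: fidelity on entangled states never seen in training.
test_fids = [abs(psi.conj() @ Vopt.conj().T @ U @ psi) ** 2
             for psi in (rand_state() for _ in range(200))]
print(f"final training loss: {res.fun:.3e}")
print(f"mean fidelity on entangled test states: {np.mean(test_fids):.4f}")
```

Under these assumptions, a near-zero training loss on product states is accompanied by near-unit fidelity on the entangled test set, mirroring the out-of-distribution claim; the local-optimizer and ansatz choices are for illustration only.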