Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing's near-term ``killer application''. However, the understanding of the empirical and generalization performance of these models is still in its infancy. In this paper we study how to balance training accuracy against generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havl\'{i}\v{c}ek et al. (Nature, 2019) and by Schuld and Killoran (PRL, 2019). Firstly, using relationships to well-understood classical models, we prove that two model parameters -- i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model -- closely control the models' complexity and therefore their generalization performance. Secondly, using ideas inspired by process tomography, we prove that these model parameters also closely control the models' ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.