Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing's near-term "killer application". However, the understanding of the empirical and generalization performance of these models is still in its infancy. In this paper, we study how to balance training accuracy against generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havl\'i\v{c}ek et al. (Nature, 2019), and Schuld and Killoran (PRL, 2019). Firstly, using relationships to well-understood classical models, we prove that two model parameters -- i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model -- closely control the models' complexity and therefore their generalization performance. Secondly, using ideas inspired by process tomography, we prove that these model parameters also closely control the models' ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.
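As a minimal numerical sketch (not taken from the paper itself), the QML models of Havl\'i\v{c}ek et al. and Schuld and Killoran produce predictions of the linear form $f(x) = \mathrm{Tr}[\rho(x)\,O]$ for a data-encoded quantum state $\rho(x)$ and a measured observable $O$; the Frobenius norm $\|O\|_F = \sqrt{\mathrm{Tr}[O^\dagger O]}$ is one of the complexity measures discussed. The state and observable below are illustrative choices, not ones used in the paper:

```python
import numpy as np

# Two-qubit observable O = Z (x) Z (an illustrative choice).
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z
O = np.kron(Z, Z)

# Frobenius norm ||O||_F = sqrt(Tr[O^dag O]); for Z (x) Z this equals 2.
fro_norm = np.linalg.norm(O, "fro")

# A toy data-encoded state rho(x) = |00><00| and the model's prediction
# f(x) = Tr[rho(x) O]; here Tr[|00><00| (Z (x) Z)] = 1.
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
rho = np.outer(psi, psi.conj())
prediction = np.real(np.trace(rho @ O))
```

Bounding $\|O\|_F$ (or the dimension of the image space) then acts as a capacity constraint on the hypothesis class, which is the lever the structural-risk-minimization results operate on.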