With near-term quantum devices available and the race for fault-tolerant quantum computers in full swing, researchers have become interested in the question of what happens if we replace a machine learning model with a quantum circuit. While such "quantum models" are sometimes called "quantum neural networks", it has been repeatedly noted that their mathematical structure is actually much more closely related to kernel methods: they analyse data in high-dimensional Hilbert spaces to which we only have access through inner products revealed by measurements. This technical manuscript summarises, formalises and extends this link by systematically rephrasing quantum models as kernel methods. It shows that most near-term and fault-tolerant quantum models can be replaced by a general support vector machine whose kernel computes distances between data-encoding quantum states. In particular, kernel-based training is guaranteed to find quantum models that are at least as good as those found by variational circuit training. Overall, the kernel perspective of quantum machine learning tells us that the way data is encoded into quantum states is the main ingredient that can potentially set quantum models apart from classical machine learning models.
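The central claim above can be illustrated with a small numerical sketch. The code below (a minimal example, not from the manuscript) uses a hypothetical angle encoding to map classical inputs to quantum states and evaluates the kernel as the fidelity between the data-encoding states, i.e. the squared inner product in Hilbert space; the resulting Gram matrix is exactly what a classical support vector machine would consume:

```python
import numpy as np

def encode(x):
    # Hypothetical angle encoding: feature x_i is written into one qubit
    # as cos(x_i)|0> + sin(x_i)|1>; the full data-encoding state is the
    # tensor product of the single-qubit states.
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi), np.sin(xi)]))
    return state

def quantum_kernel(x1, x2):
    # Kernel value = fidelity |<phi(x1)|phi(x2)>|^2 between the two
    # data-encoding states: an inner product revealed by measurement.
    return np.abs(np.vdot(encode(x1), encode(x2))) ** 2

# Build the Gram matrix for a toy dataset.
X = np.array([[0.1, 0.5], [1.2, -0.3], [0.7, 0.9]])
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])

print(np.allclose(K, K.T))           # symmetric, as a kernel must be
print(np.allclose(np.diag(K), 1.0))  # a state has fidelity 1 with itself
```

A matrix like `K` could be passed directly to any kernel machine (e.g. an SVM with a precomputed kernel), which is the sense in which the choice of data encoding, rather than the downstream classifier, carries the potential quantum advantage.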