The emergence of Quantum Machine Learning (QML) to enhance traditional classical learning methods has seen various limitations to its realisation. There is therefore an imperative to develop quantum models with unique model hypotheses to attain expressional and computational advantage. In this work we extend the linear quantum support vector machine (QSVM), with the kernel function computed through quantum kernel estimation (QKE), to form a decision tree classifier constructed from a decision directed acyclic graph of QSVM nodes - the ensemble of which we term the quantum random forest (QRF). To limit overfitting, we further extend the model to employ a low-rank Nystr\"{o}m approximation to the kernel matrix. We provide generalisation error bounds on the model and theoretical guarantees to limit errors due to finite sampling on the Nystr\"{o}m-QKE strategy. In doing so, we show that we can achieve lower sampling complexity when compared to QKE. We numerically illustrate the effect of varying model hyperparameters and finally demonstrate that the QRF is able to obtain superior performance over QSVMs, while also requiring fewer kernel estimations.
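To make the low-rank strategy concrete, the following is a minimal sketch of the standard Nystr\"{o}m approximation, $K \approx C W^{+} C^{\top}$, where $C$ holds kernel values between all $n$ points and $m$ sampled landmarks and $W$ is the kernel among the landmarks. It uses a classical RBF kernel as a stand-in; in the paper each kernel entry would instead be estimated on a quantum device via QKE. The function names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Classical placeholder; the paper estimates this entry via QKE.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def nystrom_approximation(X, m, kernel=rbf_kernel, seed=None):
    """Low-rank approximation K ~= C @ pinv(W) @ C.T from m landmarks."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = rng.choice(n, size=m, replace=False)
    # C: kernel between all n points and the m landmarks (n*m estimations)
    C = np.array([[kernel(X[i], X[j]) for j in landmarks] for i in range(n)])
    # W: kernel among the landmarks themselves (a subset of C's rows)
    W = C[landmarks, :]
    return C @ np.linalg.pinv(W) @ C.T

# Usage: approximate a 200x200 kernel matrix from O(n*m) kernel
# evaluations rather than the O(n^2) needed for the full matrix.
X = np.random.default_rng(0).normal(size=(200, 4))
K_approx = nystrom_approximation(X, m=20, seed=0)
```

The reduction from $O(n^2)$ to $O(nm)$ kernel evaluations is what underlies the lower sampling complexity claimed for Nystr\"{o}m-QKE relative to plain QKE.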