In recent years, quantum machine learning (QML) has been actively applied to various tasks, e.g., classification, reinforcement learning, and adversarial learning. However, these QML studies fall short on complex tasks because the scalability of inputs and outputs remains the biggest hurdle in QML. Motivated by this challenge, we aim to resolve the output scalability issue and focus on the projection-valued measure (PVM), which exploits the nature of probability amplitudes in quantum statistical mechanics. By leveraging PVM, the output dimension is expanded from the number of qubits $q$ to $\mathcal{O}(2^q)$, and we propose a novel QML framework for multi-class classification. We corroborate that our framework outperforms the state-of-the-art (SOTA) on various datasets using no more than 6 qubits. Furthermore, our PVM-based QML outperforms the SOTA by up to 42.2%.
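To make the output-dimension claim concrete, the following is a minimal NumPy sketch (not code from the paper) contrasting a conventional readout of $q$ single-qubit expectation values with a computational-basis PVM on the same $q$-qubit state, which yields $2^q$ outcome probabilities. The random state simply stands in for a variational circuit's output; all names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: output dimension of q Pauli-Z expectations
# versus a computational-basis PVM on a q-qubit state.
q = 6                       # number of qubits (the paper uses at most 6)
dim = 2 ** q                # Hilbert-space dimension

# Random normalized q-qubit state standing in for the circuit output.
rng = np.random.default_rng(0)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

def z_expectation(state, wire, n_qubits):
    """<Z> on one qubit: sign each basis probability by that wire's bit."""
    probs = np.abs(state) ** 2
    signs = np.array([1 if ((i >> (n_qubits - 1 - wire)) & 1) == 0 else -1
                      for i in range(len(state))])
    return float(np.sum(signs * probs))

# Conventional readout: one expectation value per qubit -> q real outputs.
z_outputs = [z_expectation(psi, w, q) for w in range(q)]

# PVM readout: projectors {|i><i|} on the computational basis give
# 2^q outcome probabilities |<i|psi>|^2 -> one score per class.
pvm_outputs = np.abs(psi) ** 2

print(len(z_outputs), len(pvm_outputs))   # 6 vs. 64 output dimensions
```

Under this reading, a PVM-based readout can address up to $2^q$ classes directly, whereas per-qubit expectations scale only linearly in $q$.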