The core of quantum machine learning is to devise quantum models with good trainability and lower generalization error bounds than their classical counterparts, ensuring better reliability and interpretability. Recent studies confirmed that quantum neural networks (QNNs) can achieve this goal on specific datasets. In this regard, it is of great importance to understand whether these advantages are preserved on real-world tasks. Through systematic numerical experiments, we empirically observe that current QNNs fail to provide any benefit over classical learning models. Concretely, our results deliver two key messages. First, QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets. Second, the trainability of QNNs is insensitive to regularization techniques, which sharply contrasts with the classical scenario. These empirical results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
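To make the experimental setting concrete, the sketch below shows what a minimal QNN classifier of the kind discussed above might look like, written in PennyLane. It is not the authors' code: the dataset, circuit depth, ansatz choice, and the optional L2 penalty (used here to illustrate applying a classical regularization technique to circuit parameters) are all illustrative assumptions.

```python
# Minimal illustrative sketch (not the paper's implementation): a variational QNN
# trained with an optional L2 penalty on its parameters. Circuit size, data, and
# hyperparameters are placeholder assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(weights, x):
    # Encode classical features as rotation angles, then apply a trainable ansatz.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def loss(weights, X, y, l2=0.0):
    # Mean squared error plus an optional L2 regularizer on the circuit parameters.
    err = 0.0
    for xi, yi in zip(X, y):
        err = err + (qnn(weights, xi) - yi) ** 2
    return err / len(X) + l2 * np.sum(weights ** 2)

# Toy data: labels in {-1, +1}, features scaled to rotation angles.
np.random.seed(0)
X = np.random.uniform(0, np.pi, size=(20, n_qubits))
y = np.where(X[:, 0] > np.pi / 2, 1.0, -1.0)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.array(np.random.normal(0, 0.1, size=shape), requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.1)
for step in range(30):
    weights = opt.step(lambda w: loss(w, X, y, l2=1e-3), weights)
```

Comparing the training and test loss of such a circuit against a classical baseline, with and without the regularization term, is the style of experiment the abstract refers to; the finding reported above is that the penalty has little effect on the QNN's trainability.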