Quantum Neural Networks (QNNs), also known as variational quantum circuits, are important quantum applications both because of their promises analogous to those of classical neural networks and because of the feasibility of their implementation on near-term noisy intermediate-scale quantum (NISQ) machines. However, the training of QNNs is challenging and much less understood. We conduct a quantitative investigation of the loss landscapes of QNNs and identify a class of simple yet extremely hard-to-train QNN instances. Specifically, we show that for typical under-parameterized QNNs there exists a dataset that induces a loss function whose number of spurious local minima depends exponentially on the number of parameters. Moreover, we show the optimality of our construction by providing an almost matching upper bound on this dependence. Whereas local minima in classical neural networks arise from non-linear activations, local minima in quantum neural networks appear as a result of quantum interference. Finally, we empirically confirm that our constructions are indeed hard instances in practice for typical gradient-based optimizers, which demonstrates the practical value of our findings.
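To make the objects in the abstract concrete, the following is a minimal sketch of a variational quantum circuit and its loss landscape on a single qubit. The circuit (an RZ followed by an RY rotation), the two-point dataset, and the optimizer settings are illustrative assumptions for exposition, not the hard instances constructed in the paper; the point is only that the loss is a trigonometric function of the parameters whose shape comes from interference (cross) terms in the measured probabilities, and that it is trained with a standard gradient-based optimizer.

```python
import numpy as np

# Toy one-qubit "QNN": |psi(theta)> = RY(theta_2) RZ(theta_1) |+>.
# Circuit, dataset, and hyperparameters are illustrative assumptions.

def ry(t):  # Pauli-Y rotation gate
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):  # Pauli-Z rotation gate
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

PLUS = np.array([1, 1], dtype=complex) / np.sqrt(2)
# Two target states form a toy dataset (input is fixed to |+>).
TARGETS = [np.array([1, 0], dtype=complex),
           np.array([1, 1j], dtype=complex) / np.sqrt(2)]

def loss(theta):
    """Sum of infidelities 1 - |<target|psi(theta)>|^2 over the dataset.

    Each overlap is a sum of complex amplitude terms, so the probability
    contains interference cross terms -- the mechanism the paper
    identifies as the source of local minima in QNN landscapes.
    """
    psi = ry(theta[1]) @ (rz(theta[0]) @ PLUS)
    return sum(1 - abs(np.vdot(t, psi)) ** 2 for t in TARGETS)

def param_shift_grad(theta):
    """Exact gradient via the parameter-shift rule for Pauli rotations."""
    g = np.zeros(2)
    for i in range(2):
        shift = np.zeros(2)
        shift[i] = np.pi / 2
        g[i] = 0.5 * (loss(theta + shift) - loss(theta - shift))
    return g

def descend(theta0, lr=0.1, steps=300):
    """Plain gradient descent, as a stand-in for a typical optimizer."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta -= lr * param_shift_grad(theta)
    return theta

l_init = loss(np.array([0.1, 0.1]))
l_final = loss(descend([0.1, 0.1]))
print(f"loss: {l_init:.3f} -> {l_final:.3f}")
```

With one rotation per parameter this toy landscape is benign; the paper's hard instances arise when the dataset makes many such trigonometric terms interfere, producing exponentially many spurious minima that trap optimizers like the one above.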