Parameterized quantum circuits can be used as quantum neural networks and have the potential to outperform their classical counterparts when trained to address learning problems. To date, most results on their performance on practical problems are heuristic in nature. In particular, the convergence rate for the training of quantum neural networks is not fully understood. Here, we analyze the dynamics of gradient descent for the training error of a class of variational quantum machine learning models. We define wide quantum neural networks as parameterized quantum circuits in the limit of a large number of qubits and variational parameters. We then find a simple analytic formula that captures the average behavior of their loss function and discuss the consequences of our findings. For example, for random quantum circuits, we predict and characterize an exponential decay of the residual training error as a function of the system parameters. We finally validate our analytic results with numerical experiments.
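For intuition on where such exponential decay can arise, the following is a standard linearized-dynamics sketch under a frozen-kernel (lazy-training) assumption introduced here for illustration; it is not necessarily the derivation used in the paper. Write the residual training error as $\epsilon(t) = f(\theta(t)) - y$ for a model $f(\theta)$ trained by gradient flow on the quadratic loss $L = \epsilon^2/2$ with learning rate $\eta$, so that $\dot{\theta} = -\eta \nabla_\theta L = -\eta\, \epsilon\, \nabla_\theta f$. Then
\[
\dot{\epsilon} = \nabla_\theta f \cdot \dot{\theta} = -\eta \,(\nabla_\theta f \cdot \nabla_\theta f)\,\epsilon \equiv -\eta K \epsilon,
\]
and if the kernel $K$ stays approximately constant during training, $\epsilon(t) \approx \epsilon(0)\, e^{-\eta K t}$: the residual error decays exponentially at a rate set by $\eta$ and $K$.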