Variational quantum algorithms (VQAs) are among the most promising algorithms in the era of Noisy Intermediate-Scale Quantum (NISQ) devices. Such algorithms are constructed from a parameterized circuit $U(\pmb{\theta})$ together with a classical optimizer that updates the parameters $\pmb{\theta}$ in order to minimize a cost function $C$. This task is generally performed with the gradient descent method, or one of its variants, in which the circuit parameters are updated iteratively using the gradient of the cost function. However, several works in the literature have shown that this method suffers from a phenomenon known as Barren Plateaus (BPs). In this work, we propose a new method to mitigate BPs. In general, the parameters $\pmb{\theta}$ used in the parameterization $U$ are randomly generated; in our method, they are instead obtained from a classical neural network (CNN). We show that this method, besides being able to mitigate BPs at initialization, is also able to mitigate their effect during VQA training. In addition, we show how this method behaves for different CNN architectures.
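To make the idea concrete, the following is a minimal sketch, assuming a PennyLane/PyTorch setup: a small classical network (here a two-layer perceptron fed a fixed random input `z`) outputs the circuit parameters $\pmb{\theta}$, and gradient descent acts on the network weights rather than on $\pmb{\theta}$ directly. The hardware-efficient RY+CNOT ansatz, the single-qubit $\langle Z \rangle$ cost, and all hyperparameters are illustrative assumptions, not the architecture used in the paper.

```python
import pennylane as qml
import torch

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(theta):
    # Illustrative hardware-efficient ansatz U(theta): RY rotations + CNOT ladder.
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(theta[layer, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Toy cost function C: expectation value of Z on the first qubit.
    return qml.expval(qml.PauliZ(0))

# Classical neural network (CNN in the paper's sense) that generates theta,
# replacing the usual random initialization of the circuit parameters.
net = torch.nn.Sequential(
    torch.nn.Linear(8, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, n_layers * n_qubits),
)

z = torch.randn(8)  # fixed input to the generator network (an assumption)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

for step in range(100):
    theta = net(z).reshape(n_layers, n_qubits)  # theta produced by the network
    cost = circuit(theta)                       # evaluate C(theta) on the device
    opt.zero_grad()
    cost.backward()   # gradients flow through the circuit into the network weights
    opt.step()        # gradient descent updates the network, not theta directly
```

Note the design choice this sketch encodes: because the optimizer moves in the network's weight space and $\pmb{\theta}$ is a smooth function of those weights, the parameters are correlated rather than drawn independently at random, which is the mechanism by which such a scheme could act against BPs both at initialization and during training.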