Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. For obvious reasons, it is expected that gradient-based optimizers will be significantly affected by barren plateaus. However, whether or not gradient-free optimizers are impacted is a topic of debate, with some arguing that gradient-free approaches are unaffected by barren plateaus. Here we show that, indeed, gradient-free optimizers do not solve the barren plateau problem. Our main result proves that cost function differences, which are the basis for making decisions in gradient-free optimization, are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization. We numerically confirm this by training in a barren plateau with several gradient-free optimizers (the Nelder-Mead, Powell, and COBYLA algorithms), and show that the number of shots required in the optimization grows exponentially with the number of qubits.
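The mechanism described above can be illustrated with a minimal sketch (not the paper's numerics): a toy cost whose variations are suppressed by a small factor `eps`, standing in for the exponential suppression in qubit number, and estimated from a finite number of measurement shots. The names `noisy_cost`, `eps`, and `shots` are illustrative assumptions; SciPy's COBYLA serves as one of the gradient-free optimizers mentioned in the abstract.

```python
# Hedged sketch: gradient-free optimization of a shot-noise-limited cost.
# The cost landscape's variations (size ~ eps) mimic barren plateau
# suppression; the estimator's statistical error scales as 1/sqrt(shots).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_cost(theta, eps=1e-3, shots=1000):
    # True cost is 0.5 + eps * cos(theta[0]); we only see a finite-shot
    # Bernoulli estimate of it, as on a real quantum device.
    p = 0.5 + eps * np.cos(theta[0])
    samples = rng.binomial(1, p, size=shots)
    return samples.mean()

# When eps << 1/sqrt(shots), cost differences between query points are
# buried in shot noise, so COBYLA cannot make reliable progress without
# exponentially many shots.
res = minimize(noisy_cost, x0=np.array([2.0]), method="COBYLA")
print(res.x, res.fun)
```

The same experiment can be repeated with `method="Nelder-Mead"` or `method="Powell"`; the conclusion is driven by the shot-noise floor, not the particular optimizer.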