Quantum algorithms for optimization problems are of general interest. Despite recent progress on classical lower bounds for nonconvex optimization under various settings and on quantum lower bounds for convex optimization, quantum lower bounds for nonconvex optimization remain widely open. In this paper, we conduct a systematic study of quantum query lower bounds for finding $\epsilon$-approximate stationary points of nonconvex functions, and we consider the following two important settings: 1) having access to $p$-th order derivatives; or 2) having access to stochastic gradients. The classical query lower bound is $\Omega\big(\epsilon^{-\frac{1+p}{p}}\big)$ in the first setting, and $\Omega(\epsilon^{-4})$ in the second setting (or $\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth). In this paper, we extend all these classical lower bounds to the quantum setting. They match the corresponding classical algorithmic results, demonstrating that there is no quantum speedup for finding $\epsilon$-approximate stationary points of nonconvex functions with $p$-th order derivative inputs or stochastic gradient inputs, whether with or without the mean-squared smoothness assumption. Technically, our quantum lower bounds are obtained by showing that the sequential nature of the classical hard instances in all these settings also applies to quantum queries, preventing any quantum speedup beyond revealing information about the stationary points sequentially.
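To make the notion of query complexity for $\epsilon$-stationarity concrete, the following toy sketch counts gradient queries made by plain gradient descent until it reaches a point with gradient norm at most $\epsilon$. This is purely illustrative: the example function, step size, and starting point are our own choices, and the paper's bounds concern worst-case hard instances rather than easy one-dimensional problems like this one.

```python
# Illustrative only: count gradient queries needed by gradient descent to
# find an epsilon-stationary point of the smooth nonconvex function
# f(x) = (x^2 - 1)^2, whose gradient is f'(x) = 4x(x^2 - 1).
# (Hypothetical example; not an instance from the paper.)

def grad(x):
    """Gradient of f(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x * x - 1.0)

def queries_to_stationarity(eps, x0=2.0, step=0.01, max_iter=10**6):
    """Run gradient descent from x0; return the number of gradient
    queries made before reaching |f'(x)| <= eps."""
    x, n = x0, 0
    while n < max_iter:
        g = grad(x)
        n += 1                    # one gradient query
        if abs(g) <= eps:         # epsilon-stationary point reached
            return n
        x -= step * g
    return n

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps = {eps:g}: {queries_to_stationarity(eps)} gradient queries")
```

Smaller accuracy targets require at least as many queries along the same trajectory; the lower bounds in the abstract quantify how fast this cost must grow with $1/\epsilon$ in the worst case, classically and quantumly.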