Machine learning algorithms typically perform optimization over a class of non-convex functions. In this work, we provide bounds on the fundamental hardness of identifying the global minimizer of a non-convex function. Specifically, we design a family of parameterized non-convex functions and employ statistical lower bounds for parameter estimation. We show that the parameter estimation problem is equivalent to the problem of function identification in the given family, and we argue that non-convex optimization is at least as hard as function identification. Together, these results prove that any first-order method can take exponential time to converge to a global minimizer.
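To give the flavor of the argument, here is a minimal sketch of a hypothetical "needle in a haystack" family (this is an illustration, not the paper's actual construction): each function in the family is a smooth convex bowl with a narrow bump carved out near an unknown parameter θ. Outside a small neighborhood of θ, gradients are exponentially close to those of the plain bowl, so a first-order method learns almost nothing about θ from its queries and typically converges to the spurious stationary point at the origin rather than the global minimizer near θ.

```python
import numpy as np

# Hypothetical illustrative family (assumption, not the paper's construction):
#   f_theta(x) = ||x||^2 - lam * exp(-||x - theta||^2 / (2 * sigma^2))
# With ||theta|| = 1 and lam > 1, the global minimizer lies near theta,
# but outside an O(sigma) neighborhood of theta the bump is exponentially
# small, so gradients there are nearly those of the plain bowl ||x||^2.

def grad_f(x, theta, lam=2.0, sigma=0.1):
    """Gradient of f_theta at x: 2x + (lam/sigma^2) * exp(...) * (x - theta)."""
    diff = x - theta
    bump = lam * np.exp(-diff @ diff / (2 * sigma**2))
    return 2 * x + (bump / sigma**2) * diff

def gradient_descent(theta, x0, lr=0.05, steps=2000):
    """Plain gradient descent, the canonical first-order method."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad_f(x, theta)
    return x

rng = np.random.default_rng(0)
d = 10
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)          # unknown "needle" on the unit sphere

x = gradient_descent(theta, x0=rng.standard_normal(d))
# Unless x0 happens to land in the tiny bump region, the iterates never
# "see" theta and collapse to the near-stationary point at the origin.
print(np.linalg.norm(x), np.linalg.norm(x - theta))
```

The design choice that drives the hardness is the exponential localization of the bump: a first-order oracle queried outside the bump region returns gradients statistically indistinguishable from those of the parameter-free bowl, which is exactly why locating θ (and hence the global minimizer) inherits the statistical lower bound for estimating θ.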