While over-parameterization is widely believed to be crucial for the success of optimization in neural networks, most existing theories of over-parameterization do not fully explain why -- they either work in the Neural Tangent Kernel regime, where neurons barely move, or require an enormous number of neurons. In practice, when the data is generated by a teacher neural network, even mildly over-parameterized neural networks can achieve 0 loss and recover the directions of the teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that as long as the loss is already below a threshold (polynomial in the relevant parameters), all student neurons in an over-parameterized two-layer neural network converge to one of the teacher neurons, and the loss goes to 0. Our result holds for any number of student neurons, as long as it is at least as large as the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape -- we show the gradient satisfies a special case of the Łojasiewicz property, which is different from the local strong convexity or PL conditions used in previous work.
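As an illustration of the setting and of the gradient condition mentioned above, the sketch below writes out a generic teacher-student two-layer objective together with a Łojasiewicz-type lower bound on the gradient norm. The notation ($f^{*}$, $f_\theta$, $m$, $n$, $\sigma$, $c$, $\beta$, $\epsilon_0$) is illustrative and not taken from the paper; in particular the exponent $\beta$ is a placeholder, with $\beta = 1/2$ recovering the usual PL-type bound, whereas the paper's condition is a different special case.

% Illustrative notation only, not the paper's exact statement:
% teacher and student two-layer networks, population square loss,
% and a generic Lojasiewicz-type gradient lower bound (beta is a placeholder).
\[
  f^{*}(x) = \sum_{i=1}^{m} a_i^{*}\,\sigma\!\big(\langle w_i^{*}, x\rangle\big),
  \qquad
  f_\theta(x) = \sum_{j=1}^{n} a_j\,\sigma\!\big(\langle w_j, x\rangle\big),
  \qquad n \ge m,
\]
\[
  L(\theta) = \tfrac{1}{2}\,\mathbb{E}_{x}\Big[\big(f_\theta(x) - f^{*}(x)\big)^{2}\Big],
  \qquad
  \|\nabla L(\theta)\| \;\ge\; c\, L(\theta)^{\beta}
  \quad \text{whenever } L(\theta) \le \epsilon_0 .
\]

A bound of this form, valid only once the loss is below the threshold $\epsilon_0$, is the kind of local landscape property that lets a local analysis conclude the loss converges to 0.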